Our Business Intelligence Architect master's course helps you gain proficiency in Business Intelligence. You will work on real-world projects in Informatica, Tableau, MSBI, Power BI, MS SQL, data warehousing, Erwin, Azure Data Factory, SQL DBA, and more. The program covers 9 courses and 32 industry-based projects.
Online Classroom Training
Self-Paced Training
Intellipaat’s Business Intelligence Architect master’s course provides in-depth knowledge of Business Intelligence and data warehousing. You will master how to design and develop enterprise-class data warehouses, build reporting solutions, write SQL, and performance-tune data warehouses. The program is designed by industry experts and includes 9 courses with 32 industry-based projects.
Online Instructor-led Courses:
There are no prerequisites for taking up this training program.
Today, there is an urgent need for Business Intelligence professionals who are well-versed in both the front end and the back end of BI. Intellipaat’s master’s course in BI architecture has been created to help you gain complete proficiency in the ETL steps and in BI reporting techniques. This training will set you apart and help you land top jobs.
Various types of databases, introduction to Structured Query Language, the distinction between client-server and file-server databases, understanding SQL Server Management Studio, SQL table basics, data types and functions, Transact-SQL, Windows authentication, data control language, and identifying keywords in T-SQL, such as DROP TABLE.
Data anomalies, update anomalies, insertion anomalies, deletion anomalies, types of dependencies, functional dependency, fully functional dependency, partial functional dependency, transitive functional dependency, multi-valued functional dependency, decomposition of tables, lossy decomposition, lossless decomposition, what is normalization, First Normal Form, Second Normal Form, Third Normal Form, Boyce-Codd Normal Form (BCNF), Fourth Normal Form, the Entity-Relationship model, entity and entity set, attributes and types of attributes, entity sets, relationship sets, degree of relationship, mapping cardinalities (one-to-one, one-to-many, many-to-one, many-to-many), and symbols used in E-R notation.
Introduction to relational databases, fundamental concepts of relational rows, tables, and columns; several operators (such as logical and relational), constraints, domains, indexes, stored procedures, primary and foreign keys, understanding group functions, the unique key, etc.
Advanced concepts of SQL tables, SQL functions, operators & queries, table creation, data retrieval from tables, combining rows from tables using inner, outer, cross, and self joins, deploying operators such as ‘intersect,’ ‘except,’ ‘union,’ temporary table creation, set operator rules, table variables, etc.
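To make the join types and set operators above concrete, here is a minimal sketch using Python's built-in sqlite3 module (SQLite stands in for SQL Server here; the table names and sample rows are invented for illustration, and the join syntax is the same in T-SQL):

```python
import sqlite3

# In-memory database with two small, made-up tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ben'), (3, 'Carlos');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 80.0), (12, 2, 120.0);
""")

# INNER JOIN: only customers that have at least one order.
inner = cur.execute("""
    SELECT c.name, o.amount
    FROM customers c
    INNER JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name, o.amount
""").fetchall()

# LEFT OUTER JOIN: every customer; NULL amount where no order exists.
left = cur.execute("""
    SELECT c.name, o.amount
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name
""").fetchall()

# UNION removes duplicates across the two result sets.
union = cur.execute("""
    SELECT customer_id FROM orders
    UNION
    SELECT id FROM customers
""").fetchall()
conn.close()
```

The inner join drops Carlos (no orders), the left join keeps him with a NULL amount, and the UNION collapses the duplicate customer ids into one set.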
Understanding SQL functions and what they do: scalar functions, aggregate functions, functions that work on different data types such as numbers, characters, strings, and dates, inline SQL functions, general functions, and duplicate functions.
Understanding SQL subqueries and their rules; the statements and operators with which subqueries can be used; using the SET clause to modify subqueries; understanding the different contexts in which subqueries appear, such as WHERE, SELECT, INSERT, UPDATE, and DELETE; and methods to create and view subqueries.
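A brief sketch of subqueries in different clauses, again using SQLite through Python's sqlite3 as a stand-in for SQL Server (the products table and its rows are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
INSERT INTO products VALUES (1, 'pen', 2.0), (2, 'book', 12.0), (3, 'lamp', 30.0);
""")

# Subquery in WHERE: products priced above the average price.
above_avg = cur.execute("""
    SELECT name FROM products
    WHERE price > (SELECT AVG(price) FROM products)
""").fetchall()

# Scalar subquery in SELECT: each price alongside the overall maximum.
with_max = cur.execute("""
    SELECT name, price, (SELECT MAX(price) FROM products) AS max_price
    FROM products ORDER BY id
""").fetchall()

# Subquery in UPDATE: discount everything cheaper than the average.
cur.execute("""
    UPDATE products SET price = price * 0.9
    WHERE price < (SELECT AVG(price) FROM products)
""")
conn.commit()
prices = [round(p, 2) for (p,) in cur.execute("SELECT price FROM products ORDER BY id")]
```

The same subquery shapes appear in the WHERE, SELECT, and UPDATE statements, which is the pattern the module above walks through.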
Learning SQL views, methods of creating, using, altering, renaming, dropping, and modifying views; understanding stored procedures and their key benefits, working with stored procedures, studying user-defined functions, and error handling.
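The view-creation steps above can be sketched as follows. SQLite (used here via Python's sqlite3) supports views but not stored procedures, so as a labeled stand-in the example registers a user-defined function from Python instead; table names and the tax rate are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
INSERT INTO sales VALUES (1, 'North', 100), (2, 'South', 250), (3, 'North', 50);

-- A view is a named, stored query; it holds no data of its own.
CREATE VIEW region_totals AS
    SELECT region, SUM(amount) AS total
    FROM sales GROUP BY region;
""")

totals = dict(cur.execute("SELECT region, total FROM region_totals").fetchall())

# Stand-in for a stored routine: a user-defined scalar function
# registered from Python and called from SQL (assumed 20% tax rate).
conn.create_function("with_tax", 1, lambda amount: round(amount * 1.2, 2))
taxed = cur.execute(
    "SELECT with_tax(total) FROM region_totals WHERE region = 'North'"
).fetchone()[0]
```

In SQL Server, the equivalent routine would be a T-SQL stored procedure or user-defined function rather than a Python callback.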
User-defined functions; types of UDFs, such as scalar, inline table-valued, and multi-statement table-valued functions; stored procedures and when to deploy them; the RANK function; and triggers and when to execute them.
SQL Server Management Studio, using pivot in MS Excel and MS SQL Server, differentiating between CHAR, VARCHAR, and NVARCHAR, XML PATH, indexes and their creation, grouping records, advantages, searching, sorting, modifying data; creating clustered indexes, using indexes to cover queries, common table expressions, and index guidelines.
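As a small illustration of common table expressions and index usage, here is a sketch with SQLite via Python's sqlite3 (the orders table and threshold are invented; in SQL Server you would inspect the plan in SSMS rather than with EXPLAIN QUERY PLAN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
INSERT INTO orders VALUES (1, 'Asha', 250), (2, 'Ben', 120), (3, 'Asha', 80);
CREATE INDEX idx_orders_customer ON orders(customer);
""")

# A common table expression names an intermediate result for reuse.
big_spenders = cur.execute("""
    WITH customer_totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders GROUP BY customer
    )
    SELECT customer FROM customer_totals WHERE total > 200
""").fetchall()

# EXPLAIN QUERY PLAN shows whether a lookup uses the index.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'Asha'"
).fetchall()
```

The plan output should mention idx_orders_customer, confirming the filter is served by an index search rather than a full table scan.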
Creating Transact-SQL queries, querying multiple tables using joins, implementing functions and aggregating data, modifying data, determining the results of DDL statements on supplied tables and data, and constructing DML statements using the output statement.
Querying data using subqueries and APPLY, querying data using table expressions, grouping and pivoting data using queries, querying temporal data and non-relational data, constructing recursive table expressions to meet business requirements, and using windowing functions to group and rank the results of a query.
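The recursive table expressions and windowing functions mentioned above can be sketched in a few lines. This uses SQLite (version 3.25 or later for window functions) through Python's sqlite3; the sequence bound of 5 is arbitrary:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Recursive CTE: generate the numbers 1..5 without any base table.
nums = cur.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 5
    )
    SELECT n FROM seq
""").fetchall()

# Windowing: a running total over the generated sequence.
running = cur.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM seq WHERE n < 5
    )
    SELECT n, SUM(n) OVER (ORDER BY n) AS running_total FROM seq
""").fetchall()
```

The final row of the windowed query pairs 5 with the cumulative total 15, which is the grouping-and-ranking pattern the module applies to business data.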
Creating database programmability objects by using T-SQL, implementing error handling and transactions, implementing transaction control in conjunction with error handling in stored procedures, and implementing data types and NULL.
Designing and implementing relational database schema; designing and implementing indexes, learning to compare between indexed and included columns, implementing clustered index, and designing and deploying views and column store views.
Explaining foreign key constraints, using T-SQL statements, usage of Data Manipulation Language (DML), designing the components of stored procedures, implementing input and output parameters, applying error handling, executing control logic in stored procedures, and designing trigger logic, DDL triggers, etc.
Applying transactions, using the transaction behavior to identify DML statements, learning about implicit and explicit transactions, isolation levels management, understanding concurrency and locking behavior, and using memory-optimized tables.
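The all-or-nothing transaction behavior described above can be sketched with sqlite3; the accounts table, amounts, and the no-negative-balance rule are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
cur.execute("INSERT INTO accounts VALUES (1, 500.0), (2, 100.0)")
conn.commit()

# Explicit transaction: both updates succeed, or neither does.
try:
    cur.execute("UPDATE accounts SET balance = balance - 700 WHERE id = 1")
    # Assumed business rule, enforced in application code: no negative balances.
    (bal,) = cur.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    if bal < 0:
        raise ValueError("insufficient funds")
    cur.execute("UPDATE accounts SET balance = balance + 700 WHERE id = 2")
    conn.commit()
except ValueError:
    conn.rollback()   # undo the partial update

balances = cur.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
```

After the rollback, both balances are unchanged; in T-SQL the same shape would use BEGIN TRANSACTION, COMMIT, and ROLLBACK inside a TRY/CATCH block.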
Accuracy of statistics, formulating statistics maintenance tasks, dynamic management objects management, identifying missing indexes, examining and troubleshooting query plans, consolidating the overlapping indexes, the performance management of database instances, and SQL server performance monitoring.
Correlated Subqueries, Grouping Sets, ROLLUP, and CUBE
Implementing correlated subqueries, using EXISTS with a correlated subquery, using a UNION query, using a GROUPING SETS query, using ROLLUP, using CUBE to generate four grouping sets, and performing a partial CUBE.
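Here is a minimal sketch of a correlated subquery with EXISTS, using SQLite via Python's sqlite3 (tables and rows are invented; GROUPING SETS, ROLLUP, and CUBE are SQL Server features that SQLite does not support, so they are omitted here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employees (id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT);
INSERT INTO departments VALUES (1, 'Sales'), (2, 'HR'), (3, 'R&D');
INSERT INTO employees VALUES (1, 1, 'Asha'), (2, 1, 'Ben'), (3, 3, 'Carlos');
""")

# Correlated subquery: the inner query references d.id from the outer
# row, so it is re-evaluated for every department.
staffed = cur.execute("""
    SELECT d.name FROM departments d
    WHERE EXISTS (SELECT 1 FROM employees e WHERE e.dept_id = d.id)
    ORDER BY d.name
""").fetchall()

# NOT EXISTS inverts the test: departments with no employees.
empty = cur.execute("""
    SELECT d.name FROM departments d
    WHERE NOT EXISTS (SELECT 1 FROM employees e WHERE e.dept_id = d.id)
""").fetchall()
```

The correlation (e.dept_id = d.id) is what distinguishes this from the plain subqueries earlier in the module.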
Project 1: Writing Complex Subqueries
Problem Statement: How to create subqueries using SQL?
Topics: This project will give you hands-on experience in working with SQL subqueries and utilizing them in various scenarios. Some of the subqueries that you will be working with and gaining hands-on experience in are: IN or NOT IN, ANY or ALL, EXISTS or NOT EXISTS, and other major queries.
Project 2: Querying a Large Relational Database
Problem Statement: How to get details about customers by querying the database?
Topics: In this project, you will work on downloading a database and restoring it on the server. You will then query the database to get customer details like name, phone number, email ID, sales made in a particular month, increase in month-on-month sales, and even the total sales made to a particular customer.
Project 3: Relational Database Design
Problem Statement: How to convert a relational design into a table in SQL Server?
Topics: In this project, you will work on converting a relational design that has enlisted within it various users, user roles, user accounts, and their statuses. You will create a table in SQL Server and insert data into it. With at least two rows in each of the tables, you will ensure that you have created respective foreign keys.
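A table design with foreign keys, as described in this project, can be sketched as follows. SQLite (via Python's sqlite3) is a stand-in for SQL Server here, and the table names mirror the project description; note that SQLite only enforces foreign keys after an explicit PRAGMA:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
cur.executescript("""
CREATE TABLE user_roles (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE users (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    role_id INTEGER NOT NULL REFERENCES user_roles(id)
);
INSERT INTO user_roles VALUES (1, 'admin'), (2, 'viewer');
INSERT INTO users VALUES (1, 'Asha', 1), (2, 'Ben', 2);
""")

# A row that points at a missing role violates the foreign key.
violated = False
try:
    cur.execute("INSERT INTO users VALUES (3, 'Carlos', 99)")
except sqlite3.IntegrityError:
    violated = True
```

In SQL Server the REFERENCES clause (or a named FOREIGN KEY constraint) gives the same guarantee without any PRAGMA.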
What is data visualization?, comparison and benefits against reading raw numbers, real use cases from various business domains, some quick and powerful examples using Tableau without going into the technical details of Tableau, installing Tableau, Tableau interface, connecting to DataSource, Tableau data types, and data preparation.
Installation of Tableau Desktop, the architecture of Tableau, the Tableau interface (layout, toolbars, Data Pane, Analytics Pane, etc.), how to get started with Tableau, and the ways to share and export work done in Tableau.
Hands-on Exercise: Play with Tableau Desktop, learn about the interface, and share and export existing work.
Connection to Excel, cubes, and PDFs; management of metadata and extracts; data preparation; joins (left, right, inner, and outer) and unions; dealing with NULL values; cross-database joins; data extraction; data blending; refreshing extracts; incremental extraction; how to build an extract, etc.
Hands-on Exercise: Connect to an Excel sheet to import data, use metadata and extracts, manage NULL values, clean up data before use, perform the join techniques, execute data blending from multiple sources, etc.
Mark, highlight, sort, group, and use sets (creating and editing sets, IN/OUT, sets in hierarchies), constant sets, computed sets, bins, etc.
Hands-on Exercise: Use marks to create and edit sets, highlight the desired items, make groups, apply sorting on results, and make hierarchies among the created sets.
Filters (addition and removal), filtering continuous dates, dimensions, and measures, interactive filters, marks card, hierarchies, how to create folders in Tableau, sorting in Tableau, types of sorting, filtering in Tableau, types of filters, filtering the order of operations, etc.
Hands-on Exercise: Use the data set by date/dimensions/measures to add filter, use interactive filter to view the data, customize/remove filters to view the result, etc.
Using Formatting Pane to work with menu, fonts, alignments, settings, and copy-paste; formatting data using labels and tooltips, edit axes and annotations, k-means cluster analysis, trend and reference lines, visual analytics in Tableau, forecasting, confidence interval, reference lines, and bands.
Hands-on Exercise: Apply labels and tooltips to graphs, annotations, edit axes’ attributes, set the reference line, and perform k-means cluster analysis on the given dataset.
Working on coordinate points, plotting longitude and latitude, editing unrecognized locations, customizing geocoding, polygon maps, WMS: web mapping services, working on the background image, including add image, plotting points on images and generating coordinates from them; map visualization, custom territories, map box, WMS map; how to create map projects in Tableau, creating dual axes maps, and editing locations.
Hands-on Exercise: Plot longitude and latitude on a geo map, edit locations on the geo map, custom geocoding, use images of the map and plot points, find coordinates, create a polygon map, and use WMS.
Calculation syntax and functions in Tableau, various types of calculations, including Table, String, Date, Aggregate, Logic, and Number; LOD expressions, including concept and syntax; aggregation and replication with LOD expressions, nested LOD expressions; levels of details: fixed level, lower level, and higher level; quick table calculations, the creation of calculated fields, predefined calculations, and how to validate.
Creating parameters, parameters in calculations, using parameters with filters, column selection parameters, chart selection parameters, how to use parameters in the filter session, how to use parameters in calculated fields, how to use parameters in reference line, etc.
Hands-on Exercise: Creating new parameters to apply on a filter, passing parameters to filters to select columns, passing parameters to filters to select charts, etc.
Dual axes graphs, histograms: single and dual axes; box plot; charts: motion, Pareto, funnel, pie, bar, line, bubble, bullet, scatter, and waterfall charts; maps: tree and heat maps; market basket analysis (MBA), using Show me; and text table and highlighted table.
Hands-on Exercise: Plot a histogram, tree map, heat map, funnel chart, and more using the given dataset and also perform market basket analysis (MBA) on the same dataset.
Building and formatting a dashboard using size, objects, views, filters, and legends; best practices for making creative as well as interactive dashboards using actions; creating stories, including the intro of story points, creating and updating story points, adding catchy visuals in stories, and adding annotations with descriptions; dashboards and stories: what is a dashboard, highlight actions, URL actions, filter actions, selecting and clearing values, best practices to create dashboards, and dashboard examples; using the Tableau workspace and Tableau interface; learning about Tableau joins and types of joins; Tableau field types, saving and publishing a data source, live vs. extract connection, and various file types.
Hands-on Exercise: Create a Tableau dashboard view, include legends, objects, and filters, make the dashboard interactive, and use visual effects, annotations, and descriptions to create and edit a story.
Introduction to Tableau Prep, how Tableau Prep helps quickly combine join, shape, and clean data for analysis, creation of smart examples with Tableau Prep, getting deeper insights into the data with great visual experience, making data preparation simpler and accessible, integrating Tableau Prep with Tableau analytical workflow, and understanding the seamless process from data preparation to analysis with Tableau Prep.
Introduction to R language, applications and use cases of R, deploying R on the Tableau platform, learning R functions in Tableau, and the integration of Tableau with Hadoop.
Hands-on Exercise: Deploy R on Tableau, create a line graph using R interface, and also connect Tableau with Hadoop to extract data.
Project 1: Analyzing global COVID-19 data with an interactive Tableau dashboard
Domain: Healthcare, COVID-19
Problem statement: Analyzing, understanding, and comparing COVID-19 cases across different countries
Description: In this project, you will work on two data sets containing country-wise information, including the number of confirmed cases, the number of deaths, and the number of new cases and deaths per day. The data sets are to be related, joined, or blended with each other to build this dashboard. You will apply filters, parameters, actions, and calculations wherever necessary to get the desired results according to the problem statements mentioned in the project work. Depending on factors such as the fields used to visualize, the number of values in each field, and the problem statement, appropriate charts and graphs are to be used. The final dashboard should be interactive, allowing users to interact with and analyze the data as per their requirements.
Project 2: Tableau dashboard for analyzing UK bank customer data
Domain: Bank customer data
Problem statement: Understanding the region-wise customer details in the UK bank data set provided
Description: In this project, you will work on bank data that contains region-wise customer details with their respective job classifications, gender and age details, and the balances maintained in the bank. You will create pie charts, donut charts, asymmetric drill-downs, and motion charts for insightful visualization. A day-wise forecast on balance is to be calculated using exponential smoothing, an inbuilt forecasting tool in Tableau. The final dashboard should be interactive, with filters and highlighters used.
Project 3: Tableau dashboard for analyzing financial data
Domain: Retail, Finance
Problem statement: Analyzing country-wise product data to understand key performance indicators such as sales and profit in order to improve the performance and sales of the products
Description: In this project, you will analyze the country-wise sales and profit for each segment and product. World maps are to be used for an interactive analysis with detailed tooltips. Country maps are displayed using interactive filters. Motion charts and customized shapes are used to enhance the visualizations. Annotations and drop lines are inserted wherever necessary. Phone and tablet layouts are added to enable mobility of the dashboards after publishing. Analyzing the outliers for each country is a major part of this project.
Project 4: Tableau dashboard for understanding agricultural data
Domain: Agriculture
Problem statement: Agricultural area-, yield-, and production-wise analysis per state
Description: In this project, you will analyze and understand data corresponding to a few states of India. Various seasonal crop categories and the details of the crops under each category are provided for detailed analysis. Interactive drill-down tree maps are to be used for insightful visualizations. Ranking crops based on their yield value per year, seasonal pie charts with production details, and district-wise charts are a few of the requirements of this project.
Introduction to Microsoft Power BI, the key features of Power BI workflow, Desktop application, BI service, and file data sources, sourcing data from web (OData, Azure), building dashboard, data visualization, publishing to cloud, DAX data computation, row context, filter context, Analytics Pane, creating columns and measures, data drill down and drill up, creating tables, binned tables, data modeling and relationships, the Power BI components like Power View, Map, Query, Pivot, Power Q & A, understanding advanced visualization.
Hands-on Exercise – Demo of building a Power BI dashboard, Source data from web, Publish to cloud, Create power tables
Learning about Power Query for self-service ETL functionalities, introduction to data mashup, working with Excel data, learning about Power BI Personal Gateway, extracting data from files, folders and databases, working with Azure SQL database and database source, connecting to Analysis Services, SaaS functionalities of Power BI.
Hands-on Exercise – Connect to a database, Import data from an excel file, Connect to SQL Server, Analysis Service, Connect to Power Query, Connect to SQL Azure, Connect to Hadoop
Installing Power BI, the various requirements and configuration settings, the Power Query, introduction to Query Editor, data transformation – column, row, text, data type, adding & filling columns and number column, column formatting, transpose table, appending, splitting, formatting data, Pivot and UnPivot, Merge Join, relational operators, date, time calculations, working with M functions, lists, records, tables, data types, and generators, Filters & Slicers, Index and Conditional Columns, Summary Tables, writing custom functions and error handling, M advanced data transformations.
Hands-on Exercise – Install Power BI Desktop and configure the settings, Use the Query Editor, Write a Power Query, Transpose a table
Introduction to Power Pivot, learning about the xVelocity engine, advantages of Power Pivot, various versions and relationships, strongly typed datasets, Data Analysis Expressions, Measures, Calculated Members, Row, Filter & Evaluation Context, Context Interactions, Context over Relations, Schema Relations, learning about Table, Information, Logical, Text, Iterator, Table, and Time Intelligence Functions, Cumulative Charts, Calculated Tables, ranking and rank over groups, Power Pivot advanced functionalities, date and time functions, DAX advanced features, embedding Power Pivot in Power BI Desktop.
Hands-on Exercise – Create a Power Pivot, Apply filters, Use advanced functionalities like date and time functions, Embed Power Pivot in Power BI Desktop, Create DAX queries for calculated columns, tables, and measures
Deep dive into Power BI data visualization, understanding Power View and Power Map, Power BI Desktop visualization, formatting and customizing visuals, visualization interaction, SandDance visualization, deploying Power View on SharePoint and Excel, top down and bottom up analytics, comparing volume and value-based analytics, working with Power View to create Reports, Charts, Scorecards, and other visually rich formats, categorizing, filtering and sorting data using Power View, Hierarchies, mastering the best practices, Custom Visualization, Authenticate a Power BI web application, Embedding dashboards in applications
Hands-on Exercise – Create a Power View and a Power Map, Format and customize visuals, Deploy Power View on SharePoint and Excel, Implement top-down and bottom-up analytics, Create Power View reports, Charts, Scorecards, Add a custom visual to report, Authenticate a Power BI web application, Embed dashboards in applications, Categorize, filter and sort data using Power View, Create hierarchies, Use date hierarchies, use business hierarchies, resolve hierarchy issues
Introduction to Power Q & A, intuitive tool to answer tough queries using natural language, getting answers in the form of charts, graphs and data discovery methodologies, ad hoc analytics building, Power Q & A best practices, integrating with SaaS applications
Hands-on Exercise – Write queries using natural language, Get answers in the form of charts, graphs, Build ad hoc analytics, Pin a tile and a range to dashboard
Getting to understand the Power BI Desktop, aggregating data from multiple data sources, how Power Query works in Power BI Desktop environment, learning about data modeling and data relationships, deploying data gateways, scheduling data refresh, managing groups and row level security, datasets, reports and dashboards, working with calculated measures, Power Pivot on Power BI Desktop ecosystem, mastering data visualization, Power View on Power BI Desktop, creating real world solutions using Power BI
Hands-on Exercise – Configure security for dashboards, Deploy data gateways, Aggregate data from multiple data sources, Schedule data refresh, Manage groups and row-level security, datasets, reports, and dashboards, Work with calculated measures
Analyzing Data with Power BI.
In this project, you will create a Power BI report and get hands-on experience with tabular visualization, matrix visualization, funnel charts, pie charts, scatter plots, and SandDance plots.
Project: In the United States, a survey was conducted across many stores on how much students spend on different kinds of purchases, such as video games, indoor games, toys, books, gadgets, etc. Create a Power BI report to demonstrate tabular visualization, matrix visualization, a funnel chart, a pie chart, a scatter plot, and a SandDance plot. Also restrict data access for the given users in the user mapping table. Publish the report on the Power BI cloud service and design the master dashboard consisting of the funnel chart and scatter plots. Then create a scheduled refresh for the dashboard every 4 hours, six times a day.
Topics: Basic Calculation using DAX, Data Transformations, Advanced Visualisations, Advanced features of Power BI Cloud Service, Context, Gateway and Schedule Refresh, Creation of Report, Sand-Dance and Percentile, Row Level Security
Case Study 1: Power BI Desktop, Cloud Service and End to End Workflow
Problem Statement: Design a dashboard with a basic set of visualizations and deploy it to the Power BI Cloud Service. Show a top-level overview of Transport Corp data using aggregated KPIs, trends, geo distributions, and filters.
Topics: Creation of Report, Publishing to Cloud
Case Study 2: Visualizations, Configuring Extended Properties and Data Calculations DAX – Introduction
Problem Statement: Design dashboard to make use of Power BI DAX formulas to perform calculations. Analyze scheduled deliveries of loads. Use correlations across measures. Implement drill downs and reference lines.
Topics: DAX Introduction
Case Study 3: Combination Visualizations for Correlated Value Columns
Problem Statement: Design dashboard to make use of Power BI DAX formulas and perform calculations. Create Bucketed Categories to represent value measures on categories axis. Use scatter plot to identify outliers or outperformers.
Topics: Basic Calculation using DAX, Measures on Category Axis
Case Study 4: Data Transformations
Problem Statement: Design an audit dashboard. Make use of Power Query. Use the Query Editor to perform data modeling by applying transformations, and manage relationships.
Topics: Data Transformations
Case Study 5: Data Transformations -Cont.
Problem Statement: Design a dashboard to analyze the trend of admissions into state universities. Use the Query Editor to perform data modeling by applying transformations such as appending data, splitting data, column formatting, filling columns, transposing tables, pivot/unpivot, merge joins, conditional columns, index columns, and summary tables.
Topics: Designing a Dashboard, Transformations
Case Study 6: Advanced Visualizations
Problem Statement: Design a dashboard to analyze the trend of admissions into state universities. Use expressions and filters to build custom visualizations.
Topics: Advanced Visualizations
Case Study 7: Advanced features of Power BI Cloud Service
Problem Statement: Knowledge check on ad hoc analytics with Power BI Q&A, dashboard notifications and alerts, getting data from Google Analytics, and customizing pre-loaded visualizations
Topics: Advanced features of Power BI Cloud Service
Case Study 8: Advanced features of Power BI Desktop Client
Problem Statement: Knowledge check on Advanced features of Power BI Desktop Client (Integrate Custom Visualizations and Create Sand Dance Visualization).
Topics: Advanced features of Power BI Desktop Client
Case Study 9: Top-Down and Bottom-Up Analysis to Identify Shipping Cost Leakages
Problem Statement: Build a set of visualizations to identify underlying outliers, and flip the same set of visualizations to perform a bottom-up analysis.
Topics: Power BI dashboard (top-down and bottom-up analysis)
Case Study 10: Value & Volume based analysis on hospital records to analyze Charges vs. Patients Inflow
Problem Statement: Build a set of visualizations linked to dynamic measures to flip analytics on user demand
Topics: Value and Volume based analysis
What is data warehousing, understanding the extract, transform and load processes, what is data aggregation, data scrubbing and data cleansing and the importance of Informatica PowerCenter ETL
Configuring and installing the Informatica tool, operational administration activities, and integration services
Hands-on Exercise: Step-by-step process for the installation of Informatica PowerCenter
Understanding the difference between active and passive transformations and the highlights of each transformation
Learning about expression transformation and connected passive transformation to calculate value on a single row
Hands-on Exercise: Calculate value on a single row using connected passive transformation
Different types of transformations like sorter, sequence generator and filter, the characteristics of each and where they are used
Hands-on Exercise: Transform data using the filter technique, use a sequence generator and use a sorter
Working with joiner transformation to bring data from heterogeneous data sources
Hands-on Exercise: Use joiner transformation to bring data from heterogeneous data sources
Understanding the ranking and union transformation, the characteristics and deployment
Hands-on Exercise: Perform ranking and union transformation
Learn the RANK and DENSE_RANK functions and their syntax
Hands-on Exercise: Perform rank and dense rank functions
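The difference between the two ranking functions can be sketched with SQLite window functions through Python's sqlite3 (the scores table is invented; the same RANK and DENSE_RANK syntax works in most SQL dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE scores (name TEXT, points INTEGER);
INSERT INTO scores VALUES ('Asha', 90), ('Ben', 90), ('Carlos', 75), ('Dev', 60);
""")

# RANK leaves gaps after ties; DENSE_RANK does not.
rows = cur.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY points DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY points DESC) AS dense_rnk
    FROM scores
    ORDER BY points DESC, name
""").fetchall()
```

With two people tied at 90 points, RANK assigns 1, 1, 3, 4 while DENSE_RANK assigns 1, 1, 2, 3.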
Understanding how router transformation works and its key features
Hands-on Exercise: Perform router transformation
Lookup transformation overview and different types of lookup transformations: connected, unconnected, dynamic and static
Hands-on Exercise: Perform lookup transformations: connected, unconnected, dynamic and static
What is SCD, processing in xml, learn how to handle a flat file, list and define various transformations, implement ‘for loop’ in PowerCenter, the concepts of pushdown optimization and partitioning, what is constraint-based loading and what is incremental aggregation
Hands-on Exercise: Load data from a flat file, implement ‘for loop’ in PowerCenter, use pushdown optimization and partitioning, do constraint-based data loading and use incremental aggregation technique to aggregate data
Different types of designers: Mapplet and Worklet, target load plan, loading to multiple targets and linking property
Hands-on Exercise: Create a mapplet and a worklet, plan a target load and load multiple targets
Objectives of performance tuning, defining performance tuning and learning the sequence for tuning
Hands-on Exercise: Do performance tuning by following different techniques
Managing repository, Repository Manager: the client tool, functionalities of previous versions and important tasks in Repository Manager
Hands-on Exercise: Manage tasks in Repository Manager
Understanding and adopting best practices for managing repository
Common tasks in workflow manager, creating dependencies and the scope of workflow monitor
Hands-on Exercise: Create workflow with dependencies of nodes
Define the variable and parameter in Informatica, parameter files and their scope, the parameter of mapping, worklet and session parameters, workflow and service variables and basic development errors
Hands-on Exercise: Define variables and parameters in functions, use the parameter of mapping, use worklet and session parameters and use workflow and service variables
Session and workflow log, using debuggers, error-handling framework in Informatica and failover and high availability in Informatica
Hands-on Exercise: Debug development errors, read workflow logs and use the error-handling framework
Configurations and mechanisms in recovery and checking health of PowerCenter environment
Hands-on Exercise: Configure recovery options and check health of PowerCenter environment
Using the infacmd, pmrep, and infasetup commands, and processing a flat file
Hands-on Exercise: Use commands: infacmd, pmrep and infasetup
Fixed-length and delimited flat files; expression transformations: sequence numbers and dynamic targeting using transaction control
Hands-on Exercise: Perform expression transformations: sequence numbers and dynamic targeting using transaction control
Dynamic target with the use of transaction control and indirect loading
Hands-on Exercise: Use of transaction control with dynamic target and indirect loading
Importance of Java transformations to extend PowerCenter capabilities, transforming data and active and passive mode
Hands-on Exercise: Use Java transformations to extend PowerCenter capabilities
Understanding the unconnected stored procedure in Informatica and different scenarios of unconnected stored procedure usage
Hands-on Exercise: Use the unconnected stored procedure in Informatica in different scenarios
Using SQL transformation (active and passive)
Hands-on Exercise: Use SQL transformation (active and passive)
Understanding incremental loading and aggregation and comparison between them
Hands-on Exercise: Do incremental loading and aggregation
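The idea behind incremental aggregation, sketched in plain Python rather than in PowerCenter itself: instead of re-aggregating the full history on every load, persist the running totals and fold in only the new batch. The keys and amounts below are invented for illustration:

```python
def incremental_aggregate(totals, new_rows):
    """Fold a batch of (key, amount) rows into the persisted totals."""
    for key, amount in new_rows:
        totals[key] = totals.get(key, 0) + amount
    return totals

totals = {}  # state kept between loads (PowerCenter stores this in its cache)
totals = incremental_aggregate(totals, [("North", 100), ("South", 40)])
totals = incremental_aggregate(totals, [("North", 25)])  # next batch only
```

Incremental loading, by contrast, limits which *rows* are extracted; incremental aggregation limits how much *computation* the aggregator redoes, which is the comparison this module draws.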
Working with database constraints using PowerCenter and understanding constraint-based loading and target load order
Hands-on Exercise: Perform constraint-based loading in a given order
Various types of XML transformation in Informatica and configuring a lookup as active
Hands-on Exercise: Perform XML transformation and configure a lookup as active
Understanding what data profiling in Informatica is, its significance in validating content and ensuring quality and structure of data as per business requirements
Hands-on Exercise: Create data profiling in Informatica and validate the content
Understanding workflow as a group of instructions/commands for integration services and learning how to create and delete workflow in Informatica
Hands-on Exercise: Create and delete workflow in Informatica
Understanding the database connection, creating a new database connection in Informatica and understanding various steps involved
Hands-on Exercise: Create a new database connection in Informatica
Working with relational database tables in Informatica, mapping for loading data from flat files to relational database files
Hands-on Exercise: Create mapping for loading data from flat files to relational database files
Understanding how to deploy Informatica PowerCenter for seamless LinkedIn connectivity
Hands-on Exercise: Deploy Informatica PowerCenter for seamless LinkedIn connectivity
Connecting Informatica PowerCenter with various data sources like social media channels such as Facebook, Twitter, etc.
Hands-on Exercise: Connect Informatica PowerCenter with various data sources like social media channels such as Facebook, Twitter, etc.
Pushdown optimization for load-balancing on the server for better performance and various types of partitioning for optimizing performance
Hands-on Exercise: Optimize using pushdown technique for load-balancing on the server for better performance and create various types of partitioning for optimizing performance
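Partitioning splits the data so several threads can process it in parallel; a hash partition routes each row by a stable hash of a key column, so a given key always lands in the same partition. The idea can be sketched in Python (illustrative; this is not PowerCenter session configuration):

```python
import zlib

def hash_partition(rows, key, n_partitions):
    """Distribute rows across n partitions by a stable hash of the key
    column, so partitions can be processed in parallel and a given key
    value always maps to the same partition."""
    partitions = [[] for _ in range(n_partitions)]
    for row in rows:
        # crc32 is used instead of hash() so the assignment is stable
        # across process restarts
        bucket = zlib.crc32(str(row[key]).encode()) % n_partitions
        partitions[bucket].append(row)
    return partitions

rows = [{"cust": f"C{i}"} for i in range(10)]
parts = hash_partition(rows, "cust", 4)
print([len(p) for p in parts])  # all 10 rows spread over 4 partitions
```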
Understanding session cache, the importance of cache creation, implementing session cache and calculating cache requirement
Hands-on Exercise: Implement cache creation and work with session cache
Project 1: Admin Console
Problem Statement: Perform administrative actions in Informatica using the Admin Console
Project 2: Deploying Informatica ETL for Business Intelligence
Problem Statement: Disparate data needs to be converted into insights using Informatica
Topics: In this Informatica project, you have access to all environments like dev, QA, UAT and production. You will first configure all the repositories in the various environments. You will receive the requirement from the client through a source-to-target mapping sheet. You will extract data from various source systems and load it into staging. From staging, it will go to the operational data store; from there, the data will go to the enterprise data warehouse, and from there it will be directly deployed for generating reports and deriving business insights.
Project 3: Deploying the ETL Transactions on Healthcare Data
Problem Statement: How to systematically load data within a hospital scenario so that it is easily available
Topics: In this Clinical Research Data Warehouse (CRDW) Informatica project, you will be working on various types of data coming from diverse sources. The warehouse contains remitted claims, both approved and disapproved, for end-user reporting. You will create CRDW load schedules on daily, weekly and monthly bases.
Case Study 1
Project: Banking Products Augmentation
Problem Statement: How to improve the profits of a bank by customizing the products and adding new products based on customer needs
Topics: In this Informatica case study, you will construct a multidimensional model for the bank. You will create a set of diagrams depicting the star-join schemas needed to streamline the products as per customer requirements. You will implement slowly changing dimensions, understand the customer–account relationships and create diagrams describing the hierarchies. You will also recommend heterogeneous products for the customers of the bank.
Case Study 2
Project: Employee Data Integration
Problem Statement: How to load a table with employee data using Informatica
Topics: In this Informatica case study, you will create a design for a common framework that can be used for loading and updating the employee ID and other details lookup for multiple shared tables. Your design will address the regular loading of shared tables. You will also keep track of when the regular load runs, when the lookup requests run, prioritization of requests if needed and so on.
1.1 Document data stores
1.2 Columnar data stores
1.3 Key/value data stores
1.4 Graph data stores
1.5 Time series data stores
1.6 Object data stores
1.7 External index
1.8 Why NoSQL or Non-Relational DB?
1.9 When to Choose NoSQL or Non-Relational DB?
1.10 Azure Data Lake Storage
2.1 Data Lake Key Concepts
2.2 Azure Cosmos DB
2.3 Why Azure Cosmos DB?
2.4 Azure Blob Storage
2.5 Why Azure Blob Storage?
2.6 Data Partitioning
2.7 Why Partitioning Data?
2.8 Consistency Levels in Azure Cosmos DB
1. Load Data from Amazon S3 to ADLS Gen2 with Data Factory
2. Working with Azure Cosmos DB
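A quick way to reason about the partition-key choice covered above: count how many items land in each logical partition and check for skew. A good key spreads items evenly; a bad one funnels most items into one hot partition. A rough sketch (illustrative Python only; the field names are hypothetical):

```python
from collections import Counter

def partition_skew(items, key):
    """Skew check for a candidate partition key: ratio of the largest
    logical partition to the average partition size. 1.0 is perfectly
    even; large values indicate a hot partition."""
    counts = Counter(item[key] for item in items)
    return max(counts.values()) / (sum(counts.values()) / len(counts))

orders = [{"country": "US", "user": f"u{i}"} for i in range(8)] + \
         [{"country": "IN", "user": "u8"}, {"country": "UK", "user": "u9"}]
print(partition_skew(orders, "country"))  # skewed: most items share one key
print(partition_skew(orders, "user"))     # even: every key is distinct
```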
3.1 Introduction to Relational Data Stores
3.2 Azure SQL Database
1. Create a Single Database Using Azure Portal
2. Create a managed instance
3. Create an elastic pool
3.3 Why SQL Database Elastic Pool?
1. Create a SQL virtual machine
2. Configure active geo-replication for Azure SQL Database in the Azure portal and initiate failover.
4.1 Azure SQL Security Capabilities
4.2 High-Availability and Azure SQL Database
4.3 Azure Database for MySQL
1. Design an Azure Database for MySQL database using the Azure portal
2. Connect using MySQL Workbench
4.4 Azure Database for PostgreSQL
1. Design an Azure Database for PostgreSQL – Single Server
4.5 Azure Database For MariaDB
1. Create an Azure Database for MariaDB server by using the Azure portal
4.6 What is PolyBase?
4.7 What is Azure Synapse Analytics (formerly SQL DW)?
1. Import Data From Blob Storage to Azure Synapse Analytics by Using PolyBase
5.1 What is Azure Batch?
5.2 Intrinsically Parallel Workloads
5.3 Tightly Coupled Workloads
5.4 Additional Batch Capabilities
5.5 Working of Azure Batch
1. Run a batch job using Azure Portal
2. Parallel File Processing with Azure Batch using the .NET API
3. Render a Blender Scene using Batch Explorer
4. Parallel R Simulation with Azure Batch
6.1 Flow Process of Data Factory
6.2 Why Azure Data Factory?
6.3 Integration Runtime in Azure Data Factory
6.4 Mapping Data Flows
1. Transform data using Mapping data flows
7.1 What is Azure Databricks?
7.2 Azure Spark-based Analytics Platform
7.3 Apache Spark in Azure Databricks
1. Run a Spark Job on Azure Databricks using the Azure portal
2. ETL Operation by using Azure Databricks
3. Stream data into Azure Databricks using Event Hubs
8.1 Working of Stream Analytics
8.2 Key capabilities and benefits
1. Analyze phone call data with Stream Analytics and visualize results in a Power BI dashboard
8.3 Stream Analytics Windowing Functions
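A tumbling window, the simplest Stream Analytics windowing function, segments the stream into fixed, non-overlapping intervals and aggregates each interval once. Real jobs express this in the Stream Analytics query language; the behavior can be sketched offline in Python (illustrative only):

```python
def tumbling_window(events, window_seconds):
    """Group (timestamp, value) events into fixed, non-overlapping
    windows and count events per window, mimicking a TumblingWindow
    aggregation over a finite event list."""
    windows = {}
    for ts, value in events:
        start = (ts // window_seconds) * window_seconds  # window the event falls in
        windows.setdefault(start, []).append(value)
    return {start: len(vals) for start, vals in sorted(windows.items())}

events = [(1, "a"), (4, "b"), (11, "c"), (12, "d"), (25, "e")]
print(tumbling_window(events, 10))  # {0: 2, 10: 2, 20: 1}
```

Hopping and sliding windows differ only in that their intervals overlap, so one event can contribute to several windows.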
9.1 What is Azure Monitor?
9.2 What data does Azure Monitor collect?
9.3 What can you Monitor?
9.4 Alerts in Azure
1. Create, View, and Manage Metric alerts using Azure Monitor
2. Monitor your Azure Data Factory Pipelines proactively with Alerts
9.5 Azure Security Logging & Auditing
1. Azure SQL Database Auditing
In this Azure Data Factory project, you are supposed to automate the transformation of the real-time video list from the YouTube channel. You will be storing multiple files at dynamic locations in Azure Data Lake Store, and these need to be transformed and copied to a data store. The list of channels should be displayed dynamically on Power BI.
Project 01: Fetch the list of videos from the attached dataset of the YouTube channel with the highest views and likes to promote advertisements on the channel with maximum traffic.
Topics: Azure Data Factory, Azure Data Lake, Triggers, SQL, Power BI
1.1 Creating Azure Data Factory
1.2 Creating Pipelines
1.3 Creating a trigger that runs a pipeline on a schedule
1.4 Transforming data using SQL
1.5 Connecting Azure Data Lake to Power BI
Project 02: Working with Azure Data Factory, Data Lake and Azure SQL
Problem Statement: You are working as an Azure Architect for Zendrix Corp. The company is service-based and earns most of its revenue from sales of its subscription-based service.
The company needs to continuously monitor its lead flow from different countries. This helps it strategize how much to invest in ad marketing for a particular country, which, in turn, helps it achieve its desired sales targets.
Currently, the company has to manually synchronize data from its live SQL database to its BI tool to check the lead flow from different countries.
The company wants an automated solution with which it can see a live dashboard of the lead count. As the Architect, you have suggested the following:
2.1 Use of Power BI Heat maps
2.2 Use of Azure SQL instead of On-Premise SQL
2.3 Use of Data Factory to automate the data lifecycle from SQL to the BI tool.
Help them achieve the above goals.
Project 03: Identify the videos that get maximum traffic in selected YouTube channels
Problem Statement: Getting the real-time list of maximum traffic fetching videos from YouTube channels to promote advertisements in the same channels (traffic should be considered on a weekly basis)
Description: There is a company ‘XYZ Pvt. Ltd’ that promotes advertisements in the maximum traffic generating YouTube channels (on a weekly basis) to drive profits. To maximize profitability, the marketing team that manages the posting of advertisements requires an interface using which they can get a real-time list of YouTube channels for promoting advertisements and monitoring the analytics of traffic on those channels.
Objective: As an Azure Data Factory specialist, you are supposed to automate the transformation of the real-time video list from YouTube channels on a weekly basis. This will help the marketing team promote advertisements on the right YouTube videos on targeted channels.
Note: The traffic can be analyzed on various parameters like the number of views, likes or comments on a particular day. You can get this publicly available data from the YouTube API.
Case Study 01: Non-Relational Data Stores
Problem Statement: Knowledge check of non-relational databases: Categories and where to use them
Topics: NoSQL or Non-Relational Database, Azure Data Lake Storage and its key components.
1.1 Scenarios where you can use NoSQL or Non-Relational Database.
1.2 Categories of Non-Relational or NoSQL databases with relevant Azure services.
1.3 Azure Data Lake Storage and its key components.
Case Study 02: Non-Relational Data Stores
Problem Statement: Copy data from Azure Blob Storage to Azure Data Lake Storage Gen2; Create an Azure Cosmos DB account and Demonstrate adding and removing regions from your Database account; Strategies for Partitioning data; Semantics of consistency levels in Cosmos DB
Topics: Azure Cosmos DB, Azure Data Factory, Blob Storage, Strategies for Partitioning Data, Semantics of consistency levels in Cosmos DB
2.1 Azure Blob Storage
2.2 Azure Data Lake Storage Gen2
2.3 Azure Cosmos DB
2.4 Partitioning data
2.5 Consistency levels
Case Study 03: Relational Data Stores
Problem Statement: Knowledge check of Relational databases: Deployment models in Azure SQL; Create an elastic pool, Azure SQL Security Capabilities; Import Data From Blob Storage to Azure Synapse Analytics by Using PolyBase
Topics: Azure SQL, PolyBase, Azure Synapse Analytics
3.1 Deployment models in Azure SQL
3.2 Elastic Pool
3.3 Azure Synapse Analytics
Case Study 04: Azure Batch, Azure Data Factory
Problem Statement: Working of Azure Batch; Flow Process of Data Factory; Types of Integration Runtime in Azure Data Factory; Transform data using Mapping data flows
Topics: Azure Batch, Data Factory, Integration Runtime, Mapping Data Flows
4.1 Working of Azure Batch
4.2 Integration Runtime in Azure Data Factory
4.3 Transform data using Mapping data flows
Case Study 05: Azure Databricks, Azure Stream Analytics
Problem Statement: ETL Operation by using Azure Databricks; Working of Stream Analytics; Stream Analytics Windowing Functions
Topics: Azure Databricks, Azure Stream Analytics, Windowing Functions
5.1 ETL operation by using Azure Databricks
5.2 Working of Stream Analytics
5.3 Windowing Functions
Case Study 06: Monitoring & Security
Problem Statement: Create, View, and Manage Metric alerts using Azure Monitor; Azure SQL Database Auditing
Topics: Azure Monitor, Alerts in Azure, Azure Security Logging & Auditing
6.1 Azure Monitor
6.2 Azure SQL Database Auditing
Introduction to Business Intelligence, understanding the concept of Data Modeling, Data Cleaning, learning about Data Analysis, Data Representation, Data Transformation.
Introduction to ETL, the various steps involved Extract, Transform, Load, using a user’s email ID to read a flat file, extracting the User ID from email ID, loading the data into a database table.
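The extract step above, deriving a user ID from the email IDs in a flat file, can be prototyped in plain Python before building the package (an illustrative sketch; the `email` column name is assumed):

```python
import csv
import io

def user_ids_from_flat_file(text):
    """Extract step of the ETL: read a delimited flat file of email IDs
    and derive each user ID (the part before '@') for loading into a
    database table."""
    rows = []
    for record in csv.DictReader(io.StringIO(text)):
        email = record["email"].strip()
        rows.append({"user_id": email.split("@", 1)[0], "email": email})
    return rows

flat_file = "email\nasha@example.com\nravi.k@example.com\n"
print(user_ids_from_flat_file(flat_file))  # user_ids: asha, ravi.k
```

The load step would then insert these dicts into the target table with the database driver of choice.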
Introduction to Connection Managers – logical representation of a connection, the various types of Connection Managers – Flat file, database, understanding how to load faster with OLE DB, comparing the performance of OLE DB and ADO.net, learning about Bulk Insert, working with Excel Connection Managers and identifying the problems.
Learning what is Data Transformation, converting data from one format to another, understanding the concepts of Character Map, Data Column and Copy Column Transformation, import and export column transformation, script and OLEDB Command Transformation, understanding row sampling, aggregate and sort transformation, percentage and row sampling.
Understanding Pivot and Unpivot Transformation, understanding Audit and Row Count Transformation, working with Split and Join Transformation, studying Lookup and Cache Transformation, integrating with Azure Analysis Services, the elastic nature of MSBI to integrate with the Azure cloud service, the scale-out deployment option for MSBI, working with cloud-borne data sources and query analysis. Scaling out the SSIS package, deploying for tighter windows, working with a larger number of data sources, SQL Server vNext for enhancing SQL Server features, and more choice of development languages and data types both on-premise and in the cloud.
Understanding data that slowly changes over time, learning the process of how new data is written over old data, best practices. Detailed explanation of the three types of SCDs – Type 1, Type 2 and Type 3 – and their differences.
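Of the three SCD types, Type 2 is the one that preserves history: instead of overwriting the old value (Type 1), it closes out the current version of the row and inserts a new one. A minimal sketch of that logic (illustrative Python; in SSIS this is the Slowly Changing Dimension transformation, and the column names here are hypothetical):

```python
from datetime import date

def scd2_apply(dimension, key, new_row, today):
    """SCD Type 2: expire the current version of a changed row and
    append a new version, so the dimension keeps full history."""
    for row in dimension:
        if row[key] == new_row[key] and row["end_date"] is None:
            if all(row[c] == new_row[c] for c in new_row):
                return dimension  # nothing changed; keep current version
            row["end_date"] = today  # close out the old version
    dimension.append(dict(new_row, start_date=today, end_date=None))
    return dimension

dim = [{"cust_id": 1, "city": "Pune",
        "start_date": date(2020, 1, 1), "end_date": None}]
scd2_apply(dim, "cust_id", {"cust_id": 1, "city": "Mumbai"}, date(2024, 6, 1))
print(dim)  # two versions: Pune (expired) and Mumbai (current)
```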
Understanding how Fuzzy Lookup Transformation varies from Lookup Transformation, the concept of Fuzzy matching
Learning about error rows configuration, package logging, defining package configuration, understanding constraints and event handlers.
Get introduced to the SSRS Architecture, components of SSRS Report Building tool, learning about the data flow in different components.
Understanding the concepts of Matrix and Tablix, working with Text Box, learning about formatting, row/column grouping, understanding sorting, formatting, concepts of Header, Footer, Totals, Subtotals and Page Breaks.
Learning about Parameters, filter and visibility expression, understanding drill-through and drill-down, defining variables, custom code.
Introduction to various aspects of Bar Chart, Line Chart, Combination Chart, Shape Chart, Sub Reports, integration of Power Query and M language with SSRS, working with additional data sources in MSBI, addition of rich transformation capabilities to MSBI, reusing M functions built for PBIX in SSRS.
Learn how to build a Dashboard with Sparklines, Data Bars, Map Charts, Gauge Charts and drilling into reports, the basics of ad hoc reporting.
Data Bar, Sparkline, Indicator, Gauge Chart, Map Chart, Report Drilling, What is Ad hoc reporting?
Understanding Report Cache, Authorization, Authentication and Report Snapshot, learning about Subscriptions and Site Security.
Understanding the concept of multidimensional analysis, understanding SSAS Architecture and benefits, learn what is Cube, working with Tables and OLAP databases, understanding the concept of Data Sources, working with Dimension Wizard, understanding Dimension Structure, Attribute Relationships, flexible and rigid relationship.
Learning about Process Dimension, the Process database, creation of Cube, understanding Cube Structure, Cube browsing, defining the various categories, Product Key and Customer Key, Column Naming, processing and deploying a Cube, Report creation with a Cube.
Hands-on Exercise – Create a Cube and name various columns, deploy a Cube after applying keys and other rules, create reports with a Cube
Understanding Data Dimensions and its importance, the various relationships, regular, referenced, many to many, fact, working on Data Partitions, and Data Aggregations.
Learning about SSAS Cube, the various types of Cubes, the scope of Cube and comparison with Data Warehouse.
The various operations on Cube, the limitations of OLAP Cubes, the architecture of in-memory analytics and its advantages.
Deploying cube with existing data warehouse capabilities to get self-service business intelligence, understanding how in-memory analytics works.
Hands-on Exercise – Deploy cube to get self-service business intelligence
Logical model of the schema used by the Cube, components of Cube, understanding Named Queries and Relationships.
An overview of the Dimensions concept, describing the Attributes and Attributes Hierarchies, understanding Key/Value Pairs, Metadata Reload, logical keys and role-based dimensions.
Hands-on Exercise – Create role based dimensions, Use Attributes Hierarchies
Understanding the Measure of Cube, analyzing the Measure, exploring the relationship between Measure and Measure Group, Cube features and Dimension usage.
Working with Cube Measures, deploying analytics, understanding the Key Performance Indicators, deploying actions and drill-through actions on data, working on data partitions, aggregations, translations and perspectives.
Hands-on Exercise – Work with Cube Measures, Deploy analytics, Deploy actions and drill-through actions on data, Make data partitions
Understanding Multidimensional Expressions language, working with MDX queries for data retrieval, working with Clause, Set, Tuple, Filter condition in MDX.
Hands-on Exercise – Apply Clause, Set and filter condition in MDX query to retrieve data
Learning about MDX hierarchies, the functions used in MDX, Ancestor, Ascendant and Descendant function, performing data ordering
Hands-on Exercise – Create MDX hierarchies, Perform data ordering in ascending order, in descending order
Data Analysis Expressions (DAX), Using the EVALUATE and CALCULATE functions, filter DAX queries, create calculated measures, perform data analysis by using DAX
Hands-on Exercise – Use the EVALUATE and CALCULATE functions, filter DAX queries, create calculated measures, perform data analysis by using DAX
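DAX's CALCULATE evaluates a measure under a modified filter context: the column filters you pass override the corresponding parts of the current context before the measure is computed. That behavior can be approximated in Python (a rough analogue for intuition, not DAX itself; table and column names are hypothetical):

```python
def calculate(rows, measure, **filters):
    """Rough analogue of DAX CALCULATE: apply the given column filters
    to the table, then evaluate the measure over the filtered rows."""
    filtered = [r for r in rows
                if all(r[col] == val for col, val in filters.items())]
    return measure(filtered)

sales = [
    {"region": "East", "year": 2023, "amount": 100},
    {"region": "East", "year": 2024, "amount": 150},
    {"region": "West", "year": 2024, "amount": 200},
]
total = lambda rows: sum(r["amount"] for r in rows)
print(calculate(sales, total))                            # 450, no filter
print(calculate(sales, total, region="East"))             # 250
print(calculate(sales, total, region="East", year=2024))  # 150
```

The real DAX differs in important ways (row context vs. filter context, filter removal with ALL, and so on), but the filter-then-aggregate shape is the same.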
Designing and publishing a tabular data model, Designing measures relationships, hierarchies, partitions, perspectives, and calculated columns
Hands-on Exercise – Design and publish a tabular data model, Design measures relationships, hierarchies, partitions, perspectives, and calculated columns
Configuring and maintaining SQL Server Analysis Services (SSAS), Non-Uniform Memory Access (NUMA), monitoring and optimizing performance, SSAS Tabular model with vNext, Excel portability, importing a model from Power BI Desktop, importing a Power Pivot model, bidirectional cross-filtering relationship in MSBI.
Hands-on Exercise – Configure and maintain SQL Server Analysis Services (SSAS), Monitor and optimize performance
Reading data with R Server from SAS, txt or Excel formats, converting data to XDF format; summarizing data, rxCrossTabs versus rxCube, extracting quantiles by using rxQuantile; visualizing data (rxSummary and rxCube, rxHistogram and rxLinePlot); processing data with rxDataStep; performing transforms using the functions transformVars and transformEnvir; processing text using RML packages; building predictive models with ScaleR; performing in-database analytics by using SQL Server
Hands-on Exercise – Read data with R Server from SAS, txt or Excel formats, convert data to XDF format; summarize data, extract quantiles by using rxQuantile; visualize data (rxSummary, rxCube, rxHistogram and rxLinePlot); perform transforms using the functions transformVars and transformEnvir; build predictive models with ScaleR; perform in-database analytics by using SQL Server
Analyzing Data with SQL Server Reporting Services
In this MSBI project, you will be creating data flow tasks in SSIS, creating SSAS Cubes, creating an SSRS Report.
Project 1: SSIS
Problem Statement: Create a data flow task to extract data from the XLS format and store it in the SQL database, storing the subcategory- and category-wise sales in a table of the database. Once you get the output, split the dataset into two other tables: Table 1 should contain the columns Sales (< 100,000), Category and Subcategory; Table 2 should contain the columns Sales (> 100,000), Subcategory and Category. The Sales column should be sorted in both tables. Divide the whole dataset in a 70:30 ratio and store the results in two different tables in the database.
Topics: Data Flow, ODBC Set up and Connection Manager, Flat File Connection, Transformation, Import Export Transformation, Split and Join Transformation, Merge and Union All Transformation
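The row-routing and sampling steps in this problem statement can be prototyped before building the SSIS package. In SSIS they map to the Conditional Split and Percentage Sampling transformations; here is an illustrative Python sketch of the same logic (column names are assumed):

```python
import random

def conditional_split(rows, threshold=100_000):
    """Mimic the Conditional Split: route rows with Sales below the
    threshold to table 1 and the rest to table 2, each sorted by Sales."""
    low = sorted((r for r in rows if r["sales"] < threshold),
                 key=lambda r: r["sales"])
    high = sorted((r for r in rows if r["sales"] >= threshold),
                  key=lambda r: r["sales"])
    return low, high

def percentage_split(rows, ratio=0.7, seed=42):
    """Mimic Percentage Sampling: shuffle, then split the dataset 70:30."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

data = [{"category": "Bikes", "subcategory": "Road", "sales": s}
        for s in (50_000, 250_000, 90_000, 120_000)]
low, high = conditional_split(data)
sample, remainder = percentage_split(data)
print([r["sales"] for r in low], [r["sales"] for r in high])
```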
Project 2: SSRS:
Problem Statement: In the United States, there are many stores in which a survey of students was conducted. Using the data set (Student Survey), extract meaningful insights by creating an SSRS report that shows tabular visualization, matrix visualization, a funnel chart, a pie chart, a scatter plot and drill-down.
Topics: Report Creation, Deployment, Concepts of Reporting Services, Tablix and Matrix, Expression and Parameters, Charts & Reports
Project 3: SSAS:
Problem Statement: Using the Adventure Works DW 2014 database, build a Cube to show the number of products for each color, the total sales amount for each currency and the count of products for each product name. Create a reference relationship between the product category and subcategory tables and show how many subcategories there are for each category. Build partitions for sales amount <= 700 and sales amount > 700. Build the aggregations and stop when the performance gain reaches 40%. Build a perspective (a subset of the above Cube) with limited measures and dimension fields.
Topics: Dimensions, Data Dimensions & Cubes, Aggregations, Measures & features of Cube, SSAS Perspectives
Case Study 1: SSIS
Problem Statement: Create the OLE DB connection and load the data into SQL Server from Excel; create a transformation to split people by age group; how to create constraints and events in a package; create a project-level parameter and a package-level parameter; how to extract the data in incremental order
Topics: Data Flow, ODBC Set up and Connection Manager, Transformation, Split & Join Transformation, Term Extraction and Lookup
Case Study 2: SSRS
Problem Statement: Steps to add a correlated column chart; how to create a report server project; use of data connections and the PerformancePoint content library; steps to create drill-down charts; how to pass a parameter from the main chart to a detail chart (pie); functions of data bars and sparklines; usage of the KPI box in an SSRS dashboard
Topics: Concepts of Reporting Services, Report Creation, Expression and Parameters, Report and Authentication, Deployment
Case Study 3: SSAS
Problem Statement: Knowledge check on data marts, measures, dimensions, Cubes, KPIs and perspectives in SSAS
Topics: Dimensions, Data Dimensions & Cubes, Measures & features of Cube, SSAS Perspectives
How does Qlik Sense vary from QlikView, the need for self-service Business Intelligence/Business Analytics tools, Qlik Sense data discovery, intuitive tool for dynamic dashboards and personalized reports and the installation of Qlik Sense and Qlik Sense Desktop
Hands-on Exercise: Install Qlik Sense and Qlik Sense Desktop
Drag-and-drop visualization, Qlik Data indexing engine, data dimensions relationships, connect to multiple data sources, creating your own dashboards, data visualization, visual analytics and the ease of collaboration
Hands-on Exercise: Connect to a database or load data from an Excel file and create a dashboard
Understand data modeling, best practices, turning data columns into rows, converting data rows into fields, hierarchical-level data loading, loading new or updated data from database, using a common field to combine data from two tables and handling data inconsistencies
Hands-on Exercise: Turn data columns into rows, convert data rows into fields, load the data in hierarchical level, load new or updated data from database and use a common field to combine data from two tables
Qlik Sense data architecture, understanding QVD layer, converting QlikView files to Qlik Sense files and working on synthetic keys and circular references
Hands-on Exercise: Convert QlikView files to Qlik Sense files and resolve synthetic keys and circular references
Qlik Sense star schema, link table, dimensions table, master calendar, QVD files and optimizing data modeling
Hands-on Exercise: Create a Qlik Sense star schema, create link table, dimensions table, master calendar and QVD files
Qlik Sense enterprise class tools, Qlik Sense custom app, embedding visuals, rapid development, powerful open APIs, enterprise-class architecture, Big Data integration, enterprise security and elastic scaling
Learning about Qlik Sense visualization tools, charts and maps creation, rich data storytelling and sharing analysis visually with compelling visualizations
Hands-on Exercise: Create charts and maps, create a story around dataset and share analysis
Understanding set analysis in Qlik Sense, various parts of a set expression like identifiers, operators, modifiers and comparative analysis
Hands-on Exercise: Do Set Analysis in Qlik Sense, use set expression like identifiers, operators, modifiers and comparative analysis
Learning about set analysis, which is a way of defining a set of data values different from the normal set, deploying comparison sets and point-in-time analysis
Hands-on Exercise: Deploy comparison sets and perform point-in-time analysis
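A Qlik set expression such as sum({$<Year={2024}>} Sales) starts from the current selection and overrides part of it (here, the Year field) before aggregating, which is exactly how comparison sets and point-in-time analysis work. That override-then-aggregate behavior can be sketched in Python (a rough analogue for intuition, not Qlik syntax; field names are hypothetical):

```python
def set_analysis(rows, measure_field, current_selection, override):
    """Sketch of a set expression: take the user's current selection,
    replace the overridden fields, then sum the measure over the
    resulting set of rows."""
    effective = dict(current_selection, **override)
    selected = [r for r in rows
                if all(r[f] in vals for f, vals in effective.items())]
    return sum(r[measure_field] for r in selected)

sales = [
    {"Year": 2023, "Region": "East", "Sales": 100},
    {"Year": 2024, "Region": "East", "Sales": 150},
    {"Year": 2024, "Region": "West", "Sales": 200},
]
# The user has selected Region=East; compare this year against last year
this_year = set_analysis(sales, "Sales", {"Region": {"East"}}, {"Year": {2024}})
last_year = set_analysis(sales, "Sales", {"Region": {"East"}}, {"Year": {2023}})
print(this_year, last_year)  # 150 100
```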
Introduction to various charts in Qlik Sense like line chart, bar chart, pie chart, table chart and pivot table chart and the characteristics of various charts
Hands-on Exercise: Plot charts in Qlik Sense like line chart, bar chart, pie chart, table chart and pivot table chart
Understanding what is a KPI chart, gauge chart, scatter plots chart and map chart/geo map
Hands-on Exercise: Plot a KPI chart, gauge chart, scatter plots chart and map chart/geo map
Introduction to the Qlik Sense Master Library, its benefits, distinct features and user-friendly applications
Hands-on Exercise: Explore and use Qlik Sense Master Library
Understanding how to do storytelling in Qlik Sense and the creation of storytelling and story playback
Hands-on Exercise: Use the storytelling feature of Qlik Sense, create a story and playback the story
Understanding mashups in Qlik Sense, creating a single graphical interface from more than one source, deploying the mashup flowchart, testing of mashups and the various mashup scenarios like simple and normal
Hands-on Exercise: Create a single graphical interface from more than one source, deploy the mashup flowchart and test mashups
Understanding the Qlik Sense Extension, working with it, various templates in Qlik Sense Extension, testing it, making Hello World dynamic, learning how it works and adding a preview image
Hands-on Exercise: Work with Qlik Sense Extension, use a template in Qlik Sense Extension and test it, make Hello World dynamic and add a preview image
Various security aspects of Qlik Sense, content security, security rules, various components of security rules and understanding data reductions and dynamic data reductions and the user access workflow
Hands-on Exercise: Create security rules in Qlik Sense and understand data reductions and dynamic data reductions and the user access workflow
Objective: This project involves working with a Qlik Sense dashboard that displays sales details (order-wise, year-wise, customer-wise or product-wise sales and so on), performing comparative analysis and a rolling six-month analysis that displays the sales trend, and placing the worksheets in a user story and publishing it.
Domain: Data Analytics
Objective: To see the current values of salaries in one column and historical values in another column, in a chart that combines a bar chart and a trend chart
Objective: Visual Mapping between the vaccination rate and measles outbreak
Evaluate installation requirements; design the installation of SQL Server and its components (drives, service accounts, etc.); plan scale-up vs. scale-out basics; plan for capacity, including if/when to shrink, grow, autogrow, and monitor growth; manage the technologies that influence SQL architecture (e.g., service broker, full text, scale out, etc.); design the storage for new databases (drives, filegroups, partitioning, etc.); design the database infrastructure; configure an SQL Server standby database for reporting purposes; Windows-level security and service-level security; core mode installation; benchmark a server before using it in a production environment (SQLIO, Tests on SQL Instance, etc.); and choose the right hardware
Test connectivity; enable and disable features; install SQL Server database engine and SSIS (but not SSRS and SSAS); and configure an OS disk
Restore vs. detach/attach; migrate security; migrate from a previous version; migrate to new hardware; and migrate systems and data from other sources
Set up and configure all SQL Server components (Engine, AS, RS, and SharePoint integration) in a complex and highly secure environment; configure full-text indexing; SSIS security; filestream; and filetable
Create, maintain, and monitor jobs; administer jobs and alerts; automate (setup, maintenance, monitoring) across multiple databases and multiple instances; send to “Manage SQL Server Agent jobs”
Design multiple file groups; database configuration and standardization: autoclose, autoshrink, recovery models; manage file space, including adding new filegroups and moving objects from one filegroup to another; implement and configure contained databases; data compression; configure TDE; partitioning; manage log file growth; DBCC
Configure and standardize a database: autoclose, autoshrink, recovery models; install default and named instances; configure SQL to use only certain CPUs (affinity masks, etc.); configure server-level settings; configure many databases/instance, many instances/server, virtualization; configure clustered instances including MSDTC; memory allocation; database mail; configure SQL Server engine: memory, fill factor, sp_configure, default options
Install a cluster; manage multiple instances on a cluster; set up subnet clustering; recover from a failed cluster node
Install an instance; manage interaction of instances; SQL patch management; install additional instances; manage resource utilization by using Resource Governor; cycle error logs
Examine deadlocking issues using the SQL Server logs and trace flags; design reporting database infrastructure (replicated databases); monitor via DMVs or other MS products; diagnose blocking, livelocking and deadlocking; diagnose waits; performance detection with built-in DMVs; know what affects performance; and locate and, if necessary, kill processes that are blocking or claiming all resources
Monitor using Profiler; collect performance data by using System Monitor; collect trace data by using SQL Server Profiler; identify transactional replication problems; identify and troubleshoot data access problems; gather performance metrics; identify potential problems before they cause service interruptions; identify performance problems; use XEvents and DMVs; create alerts on critical server conditions; monitor data and server access by creating audits and other controls; identify I/O vs. memory vs. CPU bottlenecks; and use the Data Collector tool
Implement a security strategy for auditing and controlling the instance; configure an audit; configure server audits; track who modified an object; monitor elevated privileges as well as unsolicited attempts to connect; and policy-based management
Manage different backup models, including point-in-time recovery; protect customer data even if backup media is lost; perform backup/restore based on proper strategies including backup redundancy; recover from a corrupted drive; manage a multi-TB database; implement and test a database implementation and a backup strategy (multiple files for user database and tempdb, spreading database files, backup/restore); back up a SQL Server environment; and back up system databases
Restore a database secured with TDE; recover data from a damaged database (several errors in DBCC CHECKDB); restore to a point in time; filegroup restore; and page-level restore
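A point-in-time restore follows the pattern below: restore the full backup WITH NORECOVERY, then roll the log forward to the desired moment with STOPAT. Names, paths, and the timestamp are placeholders:

```sql
-- Sketch: point-in-time restore. The full backup stays restoring
-- (NORECOVERY) until the log has been replayed up to STOPAT.
RESTORE DATABASE SalesDB
    FROM DISK = 'D:\Backups\SalesDB_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG SalesDB
    FROM DISK = 'D:\Backups\SalesDB_log.trn'
    WITH STOPAT = '2023-06-01T14:30:00', RECOVERY;
```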
Inspect physical characteristics of indexes and perform index maintenance; identify fragmented indexes; identify unused indexes; implement indexes; defrag/rebuild indexes; set up a maintenance strategy for indexes and statistics; optimize indexes (full, filtered index); statistics (full, filtered), force or fix queue; when to rebuild vs. reorganize an index; full-text indexes; and columnstore indexes
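Fragmentation checks and the rebuild-vs-reorganize decision look roughly like this; the 5%/30% thresholds are the commonly cited guideline, not hard rules, and the table/index names are placeholders:

```sql
-- Sketch: measure fragmentation, then reorganize or rebuild.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5;

-- Common guideline: reorganize for ~5-30% fragmentation, rebuild above ~30%.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);

UPDATE STATISTICS dbo.Orders WITH FULLSCAN;   -- full-scan statistics refresh
```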
Transfer data; bulk copy; and bulk insert
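A minimal bulk-load sketch with BULK INSERT; the target table, file path, and delimiters are placeholders for illustration:

```sql
-- Sketch: load a CSV file into a staging table with BULK INSERT.
BULK INSERT dbo.StagingOrders
FROM 'D:\Imports\orders.csv'
WITH (FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      FIRSTROW = 2,     -- skip the header row
      TABLOCK);         -- enables minimal logging under the right conditions
```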
Configure server security; secure the SQL Server using Windows accounts/SQL Server accounts and server roles; create login accounts; manage access to the server, SQL Server instance, and databases; create and maintain user-defined server roles; and manage certificate logins
Configure database security: database-level permissions; protect objects from being modified; auditing; and encryption
Create access to the server/database with least privilege; manage security roles for users and administrators; create database user accounts; and contained database users
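The least-privilege pattern above can be sketched as a login, a mapped database user, and membership in a read-only role; all names and the password are placeholders:

```sql
-- Sketch: least-privilege access via a login, a database user,
-- and the fixed db_datareader role. Names are placeholders.
CREATE LOGIN report_reader WITH PASSWORD = 'Str0ng!Passw0rd';

USE SalesDB;
CREATE USER report_reader FOR LOGIN report_reader;
ALTER ROLE db_datareader ADD MEMBER report_reader;   -- read-only access

-- In a contained database, a user can authenticate with no server login:
-- CREATE USER contained_reader WITH PASSWORD = 'Str0ng!Passw0rd';
```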
Manage certificates and keys, and endpoints
Project 1 : SQL Server Audit
Industry : General
Problem Statement : How to track and log events happening on the database engine
Topics : This project involves implementing an SQL Server audit, including creating the TestDB database, triggering audit events from tables, altering the audit, checking, filtering, etc. You will learn to audit an SQL Server instance by tracking and logging the events on the system. You will work with SQL Server Management Studio and learn about database-level and server-level auditing.
Project 2 : Managing SQL Server for a high-tech company
Industry : Information Technology
Problem Statement : An IT company wants to manage its MS SQL Server database and gain valuable insights from it.
Description : In this project, you will be administering the MS SQL Server database. You will learn about the complete architecture of MS SQL Server. You will be familiarized with the enterprise edition of SQL Server, the various tools of SQL Server, and creating and modifying databases in real time.
Introducing Data Warehouse and Business Intelligence, understanding the difference between a database and a data warehouse, working with ETL tools, SQL parsing.
Understanding the Data Warehousing Architecture, systems used for Reporting and Business Intelligence, understanding OLAP vs. OLTP, introduction to Cubes.
The various stages from the Conceptual Model and Logical Model to the Physical Schema, understanding Cubes, benefits of a Cube, working with OLAP multidimensional Cubes, creating a Report using a Cube.
Understanding the process of Data Normalization, rules of normalization for the first, second, and third normal forms, BCNF, deploying Erwin for generating SQL scripts.
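A first-normal-form violation and its fix make the idea concrete. The tables below are invented for illustration: the unnormalized version crams a repeating group (phone numbers) into one column, and the 1NF version moves it into its own table:

```sql
-- Sketch: repeating group vs. first normal form (illustrative tables).
CREATE TABLE CustomerBad (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100),
    phones      VARCHAR(200)   -- '555-1111, 555-2222' : violates 1NF
);

-- 1NF: one atomic value per column; the repeating group gets its own table.
CREATE TABLE Customer (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100)
);
CREATE TABLE CustomerPhone (
    customer_id INT REFERENCES Customer(customer_id),
    phone       VARCHAR(20),
    PRIMARY KEY (customer_id, phone)
);
```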
The main components of Business Intelligence – Dimensions and Fact Tables, understanding the difference between Fact Tables & Dimensions, understanding Slowly Changing Dimensions in Data Warehousing.
SQL parsing, compilation, and optimization, understanding types and scope of cubes, Data Warehousing vs. Cubes, limitations of Cubes, and the evolution of in-memory analytics.
Learning the Erwin model, understanding the Design Layer Architecture, data warehouse modeling, creating and designing user-defined domains, managing naming and data type standards.
Understanding forward and reverse engineering and the comparison between the two.
Project 1 – Logical & Physical Data Modeling Using Erwin
Data – Invoice Management (Sales)
Topics – This project involves creating logical and physical data models using the CA Erwin Data Modeler design layer architecture. You will learn about the techniques for turning a logical model into a physical design. With this project, you will become well-versed in the process of reverse and forward engineering. You will understand both the top-down and bottom-up design methodologies.
Project 2– End-to-End implementation of Data Warehouse (Retail Store)
Topics – In this project, you will learn about the process of loading data into a data warehouse using ETL tools. You will learn about the ways to create and deploy the data warehouse. Oracle supports multiple physical models but only one logical model in the data warehouse. This project will give you extensive experience in integrating, cleansing, customizing, and inserting data into the data warehouse for a retail store.
Free Career Counselling
The Business Intelligence Architect master’s course is a comprehensive course designed to help you clear multiple certifications, such as:
The entire course content is in line with respective certification programs and helps you clear the requisite certification exam with ease and get the best jobs in top MNCs.
As part of this training, you will be working on real-time projects and assignments that have immense implications in the real-world industry scenarios, thus helping you fast-track your career effortlessly.
At the end of this training program, there will be quizzes that perfectly reflect the type of questions asked in the respective certification exams and help you score better marks.
Intellipaat Course Completion Certificate will be awarded upon the completion of the project work (after expert review) and upon scoring at least 60% marks in the quiz. Intellipaat certification is well recognized in top 80+ MNCs like Ericsson, Cisco, Cognizant, Sony, Mu Sigma, Saint-Gobain, Standard Chartered, TCS, Genpact, Hexaware, etc.
Our alumni work at 3000+ top companies
Intellipaat’s Master’s Course is a structured learning path especially designed by industry experts which ensures that you transform into a Business Intelligence expert. Individual courses at Intellipaat focus on one or two specializations. However, if you have to master Business Intelligence, then this program is for you.
Intellipaat is the pioneer in Business Intelligence Architect training, and we provide:
Intellipaat offers both self-paced training and online instructor-led training.
Data Warehouse & Erwin, MS SQL, Tableau, MSBI, Informatica Developer & Admin, Power BI are online instructor-led courses
Data Warehousing, MicroStrategy, and QlikView are self-paced courses
At Intellipaat, you can enroll in either the instructor-led online training or self-paced training. Apart from this, Intellipaat also offers corporate training for organizations to upskill their workforce. All trainers at Intellipaat have 12+ years of relevant industry experience, and they have been actively working as consultants in the same domain, which has made them subject matter experts. Go through the sample videos to check the quality of our trainers.
Intellipaat offers 24/7 query resolution, and you can raise a ticket with the dedicated support team at any time. You can avail yourself of email support for all your queries. If your query does not get resolved through email, we can also arrange one-on-one sessions with our trainers.
You would be glad to know that you can contact Intellipaat support even after the completion of the training. We also do not put a limit on the number of tickets you can raise for query resolution and doubt clearance.
Intellipaat offers self-paced training to those who want to learn at their own pace. This training also gives you the benefits of query resolution through email, live sessions with trainers, round-the-clock support, and access to the learning modules on LMS for a lifetime. Also, you get the latest version of the course material at no added cost.
Intellipaat’s self-paced training is priced 75 percent lower than the online instructor-led training. If you face any problems while learning, we can always arrange a virtual live class with the trainers.
Intellipaat offers you the most updated, relevant, and high-value real-world projects as part of the training program. This way, you can implement the learning that you have acquired in a real-world industry setup. All training comes with multiple projects that thoroughly test your skills, learning, and practical knowledge, making you completely industry-ready.
You will work on highly exciting projects in the domains of high technology, ecommerce, marketing, sales, networking, banking, insurance, etc. After completing the projects successfully, your skills will be equal to 6 months of rigorous industry experience.
Intellipaat actively provides placement assistance to all learners who have successfully completed the training. For this, we are exclusively tied up with over 80 top MNCs from around the world. This way, you can be placed in outstanding organizations such as Sony, Ericsson, TCS, Mu Sigma, Standard Chartered, Cognizant, and Cisco, among other equally great enterprises. We also help you with job interview and résumé preparation.
You can definitely make the switch from self-paced training to online instructor-led training by simply paying the extra amount. You can join the very next batch, which will be duly notified to you.
Once you complete Intellipaat’s training program, working on real-world projects, quizzes, and assignments and scoring at least 60 percent marks in the qualifying exam, you will be awarded Intellipaat’s course completion certificate. This certificate is very well recognized in Intellipaat-affiliated organizations, including over 80 top MNCs from around the world and some of the Fortune 500 companies.
No. Our job assistance program is aimed at helping you land your dream job. It offers a potential opportunity for you to explore various competitive openings in the corporate world and find a well-paid job matching your profile. The final decision on hiring will always be based on your performance in the interview and the requirements of the recruiter.