1 :-
What is the Azure service for real-time stream processing and analytics?
2 :-
You build a data warehouse in an Azure Synapse Analytics dedicated SQL pool. Analysts write a complex SELECT query that contains multiple JOIN and CASE statements to transform data for use in inventory reports. The inventory reports will use the data and additional WHERE parameters depending on the report. The reports will be produced once daily. You need to implement a solution to make the dataset available for the reports. The solution must minimize query times. What should you implement?
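A solution often considered for this scenario is a materialized view in the dedicated SQL pool, which precomputes the JOIN and aggregation work once so the daily reports only apply their own WHERE filters. The sketch below is a minimal, hypothetical illustration; the table, column, and view names and the connection string are invented, not taken from the question.

```python
# Hypothetical sketch: precompute the report dataset as a materialized view
# in a Synapse dedicated SQL pool. All object names and the connection string
# are placeholders, not taken from the question.
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;"
    "Database=<dedicated-pool>;Uid=<user>;Pwd=<password>;"
)

CREATE_MV = """
CREATE MATERIALIZED VIEW dbo.InventorySummary
WITH ( DISTRIBUTION = HASH(ProductKey) )
AS
SELECT s.ProductKey,
       p.Category,
       SUM(s.Quantity) AS TotalQuantity,
       COUNT_BIG(*)    AS RowCnt
FROM dbo.FactSales AS s
JOIN dbo.DimProduct AS p
    ON s.ProductKey = p.ProductKey
GROUP BY s.ProductKey, p.Category
"""

conn = pyodbc.connect(CONN_STR, autocommit=True)
conn.cursor().execute(CREATE_MV)  # reports then SELECT ... FROM dbo.InventorySummary WHERE ...
conn.close()
```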
3 :-
How can you improve the performance of an Azure Data Factory pipeline?
4 :-
You have an Azure Databricks workspace that contains a Delta Lake dimension table named Table1. Table1 is a Type 2 slowly changing dimension (SCD) table. You need to apply updates from a source table to Table1. Which Apache Spark SQL operation should you use?
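The Delta Lake pattern usually discussed for applying source changes to a Type 2 SCD table is the Spark SQL MERGE operation. The sketch below is a minimal, hypothetical illustration: the source table name (updates), the business key (CustomerKey), the tracked attribute (City), and the IsCurrent/StartDate/EndDate columns and their order are all assumptions for illustration.

```python
# Hypothetical sketch: Type 2 SCD maintenance on a Delta table with Spark SQL.
# Assumed schema for table1: (CustomerKey, City, IsCurrent, StartDate, EndDate).
# The source table name (updates), key, and tracked attribute are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Step 1 (MERGE): expire the current row for every key whose attribute changed.
spark.sql("""
    MERGE INTO table1 AS target
    USING updates AS source
    ON target.CustomerKey = source.CustomerKey AND target.IsCurrent = true
    WHEN MATCHED AND target.City <> source.City THEN
      UPDATE SET target.IsCurrent = false,
                 target.EndDate   = current_date()
""")

# Step 2: insert the new current version for changed keys and for new keys.
spark.sql("""
    INSERT INTO table1
    SELECT s.CustomerKey,
           s.City,
           true               AS IsCurrent,
           current_date()     AS StartDate,
           CAST(NULL AS DATE) AS EndDate
    FROM updates AS s
    LEFT JOIN table1 AS t
      ON t.CustomerKey = s.CustomerKey AND t.IsCurrent = true
    WHERE t.CustomerKey IS NULL OR t.City <> s.City
""")
```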
5 :-
Which Azure service provides on-demand serverless compute power for data analysis?
6 :-
What is the key benefit of using Azure Databricks for data engineering tasks?
7 :-
8 :-
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. Table1 contains the following:
✑ One billion rows
✑ A clustered columnstore index
✑ A hash-distributed column named Product Key
✑ A column named Sales Date that is of the date data type and cannot be null
Thirty million rows will be added to Table1 each month. You need to partition Table1 based on the Sales Date column. The solution must optimize query performance and data loading. How often should you create a partition?
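For context on what "create a partition" means here: in a dedicated SQL pool, partitions are declared as boundary values on the table, as in the hypothetical sketch below (shown as new-table DDL for brevity). The boundary dates are placeholders only; how far apart they should be is exactly what the question asks you to decide, keeping in mind the published guidance of roughly one million rows per distribution per partition for clustered columnstore tables.

```python
# Hypothetical sketch: declaring date-range partitions on a dedicated SQL pool
# table. Boundary values are placeholders; the interval between them is the
# decision the question is about. Connection string is a placeholder.
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;"
    "Database=<dedicated-pool>;Uid=<user>;Pwd=<password>;"
)

CREATE_TABLE = """
CREATE TABLE dbo.Table1
(
    [Product Key] int  NOT NULL,
    [Sales Date]  date NOT NULL,
    Quantity      int
)
WITH
(
    DISTRIBUTION = HASH([Product Key]),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( [Sales Date] RANGE RIGHT FOR VALUES ('2024-01-01', '2024-07-01') )
)
"""

conn = pyodbc.connect(CONN_STR, autocommit=True)
conn.cursor().execute(CREATE_TABLE)
conn.close()
```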
9 :-
What command would you use in Azure Databricks to start a Spark session?
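A minimal PySpark sketch of obtaining a Spark session is shown below; note that Databricks notebooks already expose a ready-made session as `spark`, so getOrCreate() simply returns the existing one.

```python
# Minimal sketch: obtaining a Spark session with PySpark. In a Databricks
# notebook a session already exists as `spark`, so getOrCreate() returns it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()
print(spark.version)
```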
10 :-
You are designing an Azure Data Lake Storage solution that will transform raw JSON files for use in an analytical workload. You need to recommend a format for the transformed files. The solution must meet the following requirements:
✑ Contain information about the data types of each column in the files.
✑ Support querying a subset of columns in the files.
✑ Support read-heavy analytical workloads.
✑ Minimize the file size.
What should you recommend?
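Parquet is the columnar format most often weighed against requirements like these, since it embeds the schema (column names and types), supports reading only selected columns, and compresses well. The sketch below is a hypothetical PySpark illustration; the abfss paths and column names are placeholders.

```python
# Hypothetical sketch: transform raw JSON into Parquet and read back only a
# subset of columns. Paths and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("abfss://raw@<account>.dfs.core.windows.net/events/")

# Parquet files carry the schema (column names and data types), are
# column-oriented, and are compressed, which suits read-heavy analytics.
raw.write.mode("overwrite").parquet(
    "abfss://curated@<account>.dfs.core.windows.net/events_parquet/"
)

# Column pruning: only the selected columns are read from storage.
subset = (
    spark.read.parquet("abfss://curated@<account>.dfs.core.windows.net/events_parquet/")
    .select("event_id", "event_time")
)
subset.show(5)
```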
11 :-
What tool would you use in Azure Synapse Analytics to create interactive visualizations of your data?
12 :-
You are planning a solution to aggregate streaming data that originates in Apache Kafka and is output to Azure Data Lake Storage Gen2. The developers who will implement the stream processing solution use Java. Which service should you recommend using to process the streaming data?
13 :-
Which component of Azure Data Factory defines the connection to external data sources?
14 :-
Which Azure service offers a fully managed data warehouse solution for large-scale analytics?
15 :-
You are implementing a batch dataset in the Parquet format. Data files will be produced by using Azure Data Factory and stored in Azure Data Lake Storage Gen2. The files will be consumed by an Azure Synapse Analytics serverless SQL pool. You need to minimize storage costs for the solution. What should you do?
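One lever commonly discussed for this scenario is the compression codec applied to the Parquet files. The question's files are produced by Azure Data Factory, where this is a sink format setting rather than code, so the hypothetical sketch below uses Spark only to make the idea concrete; the path is a placeholder, and the codec choice (Snappy here) trades file size against CPU cost.

```python
# Hypothetical sketch: writing Parquet with an explicit compression codec.
# The path is a placeholder; the codec trades file size against CPU cost.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(1_000_000).withColumnRenamed("id", "order_id")

(
    df.write.mode("overwrite")
    .option("compression", "snappy")
    .parquet("abfss://batch@<account>.dfs.core.windows.net/orders/")
)
```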
16 :-
Which Azure service is mainly used for storing and analyzing large datasets in a distributed manner?
17 :-
You have an Azure Data Lake Storage Gen2 container that contains 100 TB of data. You need to ensure that the data in the container is available for read workloads in a secondary region if an outage occurs in the primary region. The solution must minimize costs. Which type of data redundancy should you use?
18 :-
Which Azure service provides managed notebooks for interactive data exploration and analysis?
19 :-
You have an Azure subscription that contains an Azure Data Lake Storage Gen2 account named account1 and an Azure Synapse Analytics workspace named workspace1. You need to create an external table in a serverless SQL pool in workspace1. The external table will reference CSV files stored in account1. The solution must maximize performance. How should you configure the external table?
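For CSV files queried by a serverless SQL pool, the configuration detail usually highlighted for performance is the native CSV parser (PARSER_VERSION = '2.0') on the external file format, together with explicit column types. The sketch below is hypothetical: the data source, file format, table, columns, container path, and connection string are all illustrative.

```python
# Hypothetical sketch: external table over CSV in a serverless SQL pool, using
# the native CSV parser (PARSER_VERSION = '2.0') and explicit column types.
# All names, paths, and the connection string are illustrative.
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=workspace1-ondemand.sql.azuresynapse.net;"
    "Database=<serverless-db>;Uid=<user>;Pwd=<password>;"
)

CREATE_DATA_SOURCE = """
CREATE EXTERNAL DATA SOURCE account1_src
WITH ( LOCATION = 'https://account1.dfs.core.windows.net/data' )
"""

CREATE_FILE_FORMAT = """
CREATE EXTERNAL FILE FORMAT csv_parser_v2
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS ( FIELD_TERMINATOR = ',', PARSER_VERSION = '2.0', FIRST_ROW = 2 )
)
"""

CREATE_TABLE = """
CREATE EXTERNAL TABLE dbo.SalesExternal
(
    OrderId   int,
    OrderDate date,
    Amount    decimal(18, 2)
)
WITH (
    LOCATION = 'sales/*.csv',
    DATA_SOURCE = account1_src,
    FILE_FORMAT = csv_parser_v2
)
"""

conn = pyodbc.connect(CONN_STR, autocommit=True)
cursor = conn.cursor()
for ddl in (CREATE_DATA_SOURCE, CREATE_FILE_FORMAT, CREATE_TABLE):
    cursor.execute(ddl)
conn.close()
```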
20 :-
What is the main purpose of Azure Data Catalog?
Tips for improving your score: