BASICS OF BIG DATA

Question
Scraping within ETL usually falls within:
A. Extract
B. Transform
C. Load
D. None of the above
Correct answer: A (Extract)

Explanation:
Detailed explanation-1: Web scraping is one form of ETL: you extract data from a website, transform it to fit the format you want, and load it into a CSV file. To extract data from the web, you need to know a few basics of HTML, the backbone of every web page you see on the internet.

Detailed explanation-2: Step 1: Extraction. In this first step of the ETL process, structured and unstructured data is imported and consolidated into a single repository. Volumes of data can be extracted from a wide range of sources, including existing databases and legacy systems, as well as cloud, hybrid, and on-premises environments.

Detailed explanation-3: To put it in simpler terms: web scraping refers to the process of extracting data from web sources and structuring it into a more convenient format. It does not involve any data processing or analysis. Data mining, by contrast, refers to the process of analyzing large datasets to uncover trends and valuable insights.

Detailed explanation-4: The extraction step of an ETL process involves connecting to the source systems and both selecting and collecting the data needed for analytical processing within the data warehouse or data mart.
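The scrape-as-ETL idea in the explanations above can be sketched in Python. This is a minimal illustration, not a production scraper: the HTML snippet, the `data-price` attribute, and the field names are invented for the example, and a hard-coded string stands in for a live web page so the sketch needs no network access.

```python
import csv
import io
from html.parser import HTMLParser

# Hypothetical page content standing in for a live website (no network needed).
HTML = """
<ul>
  <li data-price="9.99">Widget</li>
  <li data-price="19.50">Gadget</li>
</ul>
"""

class ProductParser(HTMLParser):
    """Extract step: pull product name/price pairs out of raw HTML."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._price = None

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._price = dict(attrs).get("data-price")

    def handle_data(self, data):
        if self._price is not None and data.strip():
            self.rows.append({"name": data.strip(), "price": self._price})
            self._price = None

def etl(html: str) -> str:
    # Extract: parse the markup into raw records.
    parser = ProductParser()
    parser.feed(html)
    # Transform: convert the scraped price strings into numeric values.
    rows = [{"name": r["name"], "price": float(r["price"])} for r in parser.rows]
    # Load: write the structured records to CSV (an in-memory file here).
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(etl(HTML))
```

Note that the scraping itself happens entirely in the Extract step; the Transform and Load steps only reshape and store what was already pulled from the page, which is why the answer is A.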
