Importing Data – Batch

Dask is an open-source library designed to parallelize and scale out Python code, with a particular focus on numerical and analytical computing. At its core, Dask enables large-scale computations to be executed efficiently on single machines or across distributed clusters. In this notebook, we will show how to use Dask and CrateDB for the large-scale import of weather readings from 1,250 cities globally. This rich dataset can be found on Kaggle and includes metrics like temperature ranges, precipitation, wind speed, and more, allowing us to analyze weather patterns comprehensively. 

As a first step, we set up our environment by running a pip command. This command installs Dask alongside Pandas for data manipulation and SQLAlchemy for database operations. 
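
For reference, a command along the following lines covers the packages mentioned above. The CrateDB SQLAlchemy dialect is added here as an assumption on our part, since connecting to CrateDB through SQLAlchemy requires it; depending on your environment it ships as the crate client's sqlalchemy extra or as the sqlalchemy-cratedb package.

    pip install dask pandas sqlalchemy 'crate[sqlalchemy]'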

In this notebook, we're working with three key datasets: 

  • The daily weather dataset contains the weather measurements recorded across the various cities. 
  • The cities dataset provides details on the cities included in the weather data. 
  • The countries dataset contains information about the countries corresponding to those cities. 

To handle these datasets efficiently, especially the potentially large Parquet file containing the daily weather data, we use Dask's DataFrame API. By leveraging Dask, we can work with datasets that exceed the memory capacity of a single machine, processing the data in chunks rather than loading the entire dataset into memory at once.
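
As a sketch of what this looks like in code, Dask reads the large Parquet file lazily, while the two small lookup tables can be read eagerly with Pandas. The file names and formats below are assumptions rather than the notebook's exact paths:

    import dask.dataframe as dd
    import pandas as pd

    # Daily weather measurements: a large Parquet file, read lazily in partitions
    weather = dd.read_parquet("daily_weather.parquet")

    # Cities and countries are small lookup tables, loaded eagerly with Pandas
    cities = pd.read_csv("cities.csv")
    countries = pd.read_csv("countries.csv")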

Once we've loaded the data, we examine its structure and content: the number of records, the data types of each column, and a preview of the first few rows to understand the data format.
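
With a Dask DataFrame, such an inspection could look as follows. Column types come from metadata, while counting rows triggers an actual computation over all partitions:

    # Number of records (forces a pass over all partitions)
    print(len(weather))

    # Data types of each column (available from metadata, no computation needed)
    print(weather.dtypes)

    # First few rows, read from the first partition only
    print(weather.head())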

The next step is to load the data into CrateDB. CrateDB is well equipped to handle large datasets like ours with efficiency and flexibility. To take full advantage of CrateDB's capabilities, including fulltext indexes, we manually create our tables before the import. Two connection strings illustrate how to connect to a CrateDB instance, depending on whether the database is hosted locally or in the cloud.
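
The sketch below illustrates the idea. The connection string for the CrateDB SQLAlchemy dialect typically takes the form crate://user:password@host:4200, with ssl=true for TLS-protected cloud clusters; the table schema, column names, and credentials are illustrative assumptions rather than the notebook's exact definitions:

    import sqlalchemy as sa

    # Local single-node instance vs. CrateDB Cloud (placeholders, not real credentials)
    LOCAL_URI = "crate://crate@localhost:4200"
    CLOUD_URI = "crate://admin:<PASSWORD>@<CLUSTER>.cratedb.net:4200?ssl=true"

    engine = sa.create_engine(LOCAL_URI)

    # Create the table up front so we can define a fulltext index explicitly
    with engine.connect() as conn:
        conn.execute(sa.text("""
            CREATE TABLE IF NOT EXISTS weather_data (
                "date" TIMESTAMP,
                city_name TEXT,
                avg_temp_c DOUBLE PRECISION,
                precipitation_mm DOUBLE PRECISION,
                INDEX city_name_ft USING FULLTEXT (city_name)
            )
        """))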

For those using CrateDB Cloud, the platform's import mechanism is the most straightforward and efficient method, as it directly uploads data to a staging area for import, minimizing network transfer. 

However, if you're running CrateDB locally or prefer a more hands-on approach, Dask's parallelized DataFrame import is highly recommended. Unlike Pandas, which is single-threaded, Dask allows us to utilize multiple CPUs and partition the data for parallel import, significantly speeding up the process.

When importing data with Dask, it's crucial to balance the workload to avoid overloading the database. Experimenting with the chunk size helps find the optimal setting for your specific CrateDB configuration. For example, a chunk size of 10,000 records has been shown to work well on a single-node CrateDB setup with 4 GB of heap memory. Running the import on a CrateDB instance with adequate resources can yield impressive performance: in one scenario, a Docker-containerized CrateDB with 5 CPUs and 4 GB of heap space achieved around 80,000 inserts per second, including indexing.
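
Continuing from the weather DataFrame loaded earlier, a parallel import could be expressed with Dask's to_sql along these lines; the table name, chunk size, and connection string are assumptions to adapt to your own setup:

    # Write partitions concurrently instead of sequentially,
    # batching 10,000 rows per INSERT statement.
    weather.to_sql(
        "weather_data",
        uri="crate://crate@localhost:4200",
        if_exists="append",
        index=False,
        chunksize=10_000,
        parallel=True,
    )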

To conclude, importing data into CrateDB requires a thoughtful approach to table creation, connection management, and data import strategy. By leveraging CrateDB's fulltext indexing and Dask's parallel processing capabilities, we can efficiently import large datasets while optimizing for query performance.
