What is data modeling and how can you model data for better analytical outcomes?
Discover everything about data modeling - from its benefits to models and execution. Learn how to speed up your model deployment with the help of Keboola.

Being data-driven helps businesses cut costs and produce higher returns on investment, increasing their financial viability in the fight for a piece of the market pie.

But *becoming* data-driven is a more labor-intensive process. In the same way that companies must align themselves around business objectives, data professionals must align their data around data models. 

In other words: if you want to run a successful data-driven operation, you need to model your data first.

Are you already familiar with data modeling? Start your free trial and speed up model deployment.

What is data modeling?

Data modeling is the process of discovering and imposing structure on raw data. 

Organizing your raw data into a structured form is the first step in building a database or data warehouse, and it is essential for gaining valuable business insights from your data.


Data modeling is not just necessary for nailing the “Load” aspect of your ETL; it also brings several advantages for the entire organization.

The benefits of data modeling

Modeling data offers multiple benefits to businesses:

  1. Clear, available, and trustworthy data. Modeled data is easy to understand and its interpretation is unambiguous. It takes the guesswork out of the equation and makes data available for whatever work you need it for. Properly modeled data is trustworthy, so you don’t have to spend time verifying data before using it.
  2. Faster time to insights. Because data is clear, available, and trustworthy, you can move through the data cleaning and verification cycles faster, shortening the time it takes to acquire insights. This also gives you a competitive edge over businesses in your industry that are still getting their data operations off the ground.
  3. Lower maintenance costs. Having a robust data model reduces your need for monitoring and maintenance. The design of the data model already foresees possible issues with incoming data and database scaling, so you can monitor for those in advance and correct them faster, thus lowering the costs on your bottom line.
  4. Cross-company alignment. Clear data models drive alignment across company departments. For example, finance might count a new customer from the moment an invoice is processed (revenue), while sales might count from the date the customer paid the invoice (cashflow). Both definitions are valid, but if your customer table has a single, well-defined column labeled “Date of customer acquisition”, both departments can agree on how new customers are counted; whether the column is based on revenue or cashflow becomes irrelevant. Data modeling unites disparate interpretations across the company into one aligned view of business operations. That means fewer meetings spent getting stakeholders on the same page and less assistance needed when interpreting data analyses, leaving you more time for cutting-edge work that boosts your company’s growth.
  5. Faster product development. If you run a business which uses data for product development, data modeling will speed up your operations. Why? Because data will be available in the required form without your product engineers waiting on your data warehouse administrators to re-shape it to fit production needs.
  6. Easier scaling. Modeling data provides foresight into how your company’s data needs will change. A properly modeled database or data warehouse will be able to scale faster and grow alongside your company’s expansion. Instead of staggering behind and playing catchup, your data operations will be able to support your business as it scales.

How to choose the right data model

Depending on your business needs and the intrinsic structure of your data, there are three high-level ways of modeling it:

The relational model

The relational model is the most common way of modeling data. You break your raw data down into entities and relations (ER modeling for short). Entities are business units such as customers, products, or shipping providers. Each entity becomes a table (as seen below), and the attributes of each entity (e.g. customer name, customer address, and customer email) become columns in that table. Relationships between entities are expressed through foreign keys: a Customer ID of 24221, which uniquely identifies a row in the Customer table, appears as the foreign key 24221 in the Purchase table. The most frequently used databases for relational models are MySQL and PostgreSQL.

[Image: relational model example]
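To make the Customer/Purchase relationship concrete, here is a minimal sketch in Python using SQLite. The table and column names are illustrative assumptions, not a prescribed schema:

```python
import sqlite3

# Two tables linked by a foreign key, mirroring the Customer ID 24221
# example above. Schema and sample data are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""
    CREATE TABLE Customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT
    )
""")
conn.execute("""
    CREATE TABLE Purchase (
        purchase_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES Customer(customer_id),
        amount      REAL NOT NULL
    )
""")

conn.execute("INSERT INTO Customer VALUES (24221, 'Ada Lovelace', 'ada@example.com')")
conn.execute("INSERT INTO Purchase VALUES (1, 24221, 99.90)")

# Join across the foreign key: which customer made purchase 1?
row = conn.execute("""
    SELECT c.name, p.amount
    FROM Purchase p JOIN Customer c ON c.customer_id = p.customer_id
    WHERE p.purchase_id = 1
""").fetchone()
print(row)  # ('Ada Lovelace', 99.9)
```

The foreign key is what lets the database (rather than application code) guarantee that every purchase points at a real customer.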


The graph model

The graph model is similar to the relational model in that it also has entities (called nodes) and relationships between them. Unlike the relational model, however, the graph model allows for many types of relationships (bidirectional, recursive, and more) and does not break the data down into tables. Instead, it tries to mimic the interconnectedness of the data without artificially separating it into groups. For this reason, it is commonly applied by businesses that model networks (e.g. social media and telecommunications). The most commonly used database for graph modeling is Neo4j.

[Image: graph model example]
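The node-and-relationship idea can be sketched with a tiny in-memory graph in Python. The user names and the “follows” relationship are invented for illustration; a production system would use a graph database such as Neo4j:

```python
from collections import defaultdict

# Nodes are users; directed edges are "follows" relationships.
follows = defaultdict(set)

def add_follow(a, b):
    follows[a].add(b)

add_follow("alice", "bob")
add_follow("bob", "carol")
add_follow("alice", "carol")
add_follow("carol", "alice")   # relationships may be cyclic or bidirectional

# Path traversal: can we reach `goal` from `start` via follows edges?
def reachable(start, goal):
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for nxt in follows[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

print(reachable("bob", "alice"))  # True: bob -> carol -> alice
```

Path traversal like this is exactly the kind of query that is awkward to express over relational tables (it needs recursive joins) but is native to the graph model.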

The document model

Unlike the relational and graph models, the document model works with unstructured or semi-structured data. Think of documents such as legal contracts or transcripts. There is logic to them, but it is unclear how you would separate multiple legal contracts into a predefined set of predictable entities or nodes. Instead, each document is represented by a key (a document identifier), and its contents are parsed by specialized engines into a value (an index over the tokenized document contents), forming a key-value pair. The modeling is often done at retrieval (analysis) time rather than when the data enters the database, which makes this approach more exploratory and less strictly defined than the relational and graph models. If you are looking to model documents, check out Elasticsearch.
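A simplified, in-memory stand-in for the key-value-plus-index idea looks like this. The document keys and contract texts are invented for illustration, and real engines such as Elasticsearch do far more sophisticated tokenization and scoring:

```python
from collections import defaultdict

# Key-value store: document identifier -> raw document contents.
documents = {
    "contract-001": "Supplier shall deliver goods within thirty days",
    "contract-002": "Buyer shall pay the supplier within sixty days",
}

# Inverted index: token -> set of document keys containing that token.
index = defaultdict(set)
for doc_id, text in documents.items():
    for token in text.lower().split():
        index[token].add(doc_id)

# Retrieval-time "modeling": which contracts mention 'supplier'?
print(sorted(index["supplier"]))  # ['contract-001', 'contract-002']
```

Note that no schema was defined up front: the structure (the index) is derived from the documents themselves, which is why this style of modeling suits exploratory, text-heavy workloads.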

So, which one is right for you? There are three criteria to bear in mind when picking your model:

  1. Nature of business problem. If your business problem is intrinsically structured as a graph (social media app), relational table (e-commerce transactions), or document (legal document reader), pick the model which best reflects that structure.
  2. Foreseeable analyses. Choose the model that will outperform the others depending on the type of analyses that you’ll perform once your data has been modeled. If you expect heavy NLP or language processing, opt for the document model. If you are going into OLAP, choose the relational model. Perhaps you want to apply similarity algorithms or traverse paths in your analyses? The graph model will suit you best.
  3. Technical expertise and operational constraints. The majority of data professionals are comfortable with relational tables, but less so with document or graph stores. If you have the expertise, or your project gives you enough flexibility (in terms of resources and deadlines) to develop it on the job, go for the model that interests you most.

Let’s say that you’ve established which model is best suited for your data. Where do you start?

How to model data: the 3-step process 

Modeling data is like modeling your dream home. You start with a concept (“I want a master bedroom and at least two bathrooms”), hire an architect to draw up the official sketch of your home, and finally hire construction workers who decide where the plumbing and electricity will go to make your dream come true.

Modeling data is also a process, which happens on three different levels of abstraction:

  1. Conceptual data model layer - sets out the main ideas of what data should look like.
  2. Logical data model layer - determines the overall logic and constraints for the data model.
  3. Physical data model layer - implements the data model on a physical level.

Let’s look at a concrete example to gain a better understanding:

[Image: data modeling example]
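The three layers above can also be sketched in code for a hypothetical “customers place orders” model. All names here are assumptions, chosen only to illustrate the progression from concept to physical schema:

```python
import sqlite3

# 1. Conceptual layer: the main idea, in plain words.
conceptual = "A customer can place many orders; every order belongs to one customer."

# 2. Logical layer: entities, attributes, and constraints, still platform-neutral.
logical = {
    "Customer": {"customer_id": "unique id", "name": "required"},
    "Order":    {"order_id": "unique id", "customer_id": "references Customer"},
}

# 3. Physical layer: the same model implemented in a concrete engine (SQLite here).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "Order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES Customer(customer_id)
    );
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['Customer', 'Order']
```

The point of the layering is that the conceptual and logical levels stay stable even if you later swap SQLite for PostgreSQL or another engine; only the physical layer changes.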


Use Keboola to speed up your data model deployment

Keboola is an all-in-one data operations platform, which was built to enable data professionals to achieve their data goals faster.


How can Keboola help you to deploy your data models?

You’ve agreed on the conceptual level and fleshed out the logical level. The next step is to implement your data model on the physical level.

This part of model deployment can be painfully long and unnecessarily difficult. Coding out every single table, defining the tables’ mutual relationships, and imposing constraints all require writing a lot of code, which ultimately delays the model launch.

Instead, opting for Keboola will speed up model deployment. With its click-and-play design, you can implement your data model in a matter of minutes, rather than weeks. Here’s how:

  1. Connect all of your incoming raw data with your target database. No need to self-code data extractors.
  2. Automatically transform the raw data into your desired form. Implement all of the modeling constraints that your data needs. Exclude irrelevant data, keep only the information that needs to be included, and reshape the data into the relational/graph/document form required. Keboola’s transformation layer is also reusable, allowing you to repurpose this solution across your other models to save time.
  3. Deploy your database (regardless of the model you choose). Keboola allows you to connect your raw data (from Keboola’s extractors), through your transformations, into the data storage of your choice in a matter of minutes. No need to code out your logic within an ORM; simply click the desired database and select the destination table in which your data is stored.
  4. Automate the entire end-to-end process to speed up data model deployment. Keboola’s Orchestrator is an extremely powerful piece of software, which can be used to automate other (non-data-modeling) pipelines as well. But when unleashed upon data modeling, it allows you to rerun your models without overhead, schedule modeling to keep your data fresh, and use it as a baseline to compare your current model to any model changes.

It’s time to speed up your model deployment. Why wait for weeks at a time when modeling can be done in minutes? Start your model deployment today.

