Jumia builds a next-generation data platform with metadata-driven specification frameworks


Jumia is a technology company born in 2012, present in 14 African countries, with its main headquarters in Lagos, Nigeria. Jumia is built around a marketplace, a logistics service, and a payment service. The logistics service enables the delivery of packages through a network of local partners, and the payment service facilitates the payments of online transactions within Jumia's ecosystem. Jumia is listed on the NYSE and has a market cap of $554 million.

In this post, we share part of the journey that Jumia took with AWS Professional Services to modernize its data platform, which ran on a Hadoop distribution, to AWS serverless based solutions. Some of the challenges that motivated the modernization were the high cost of maintenance, lack of agility to scale computing at specific times, job queuing, lack of innovation when it came to acquiring more modern technologies, complex automation of the infrastructure and applications, and the inability to develop locally.

Solution overview

The basic concept of the modernization project is to create metadata-driven frameworks, which are reusable, scalable, and able to respond to the different phases of the modernization process. These phases are: data orchestration, data migration, data ingestion, data processing, and data maintenance.

This standardization for each phase was considered as a way to streamline the development workflows and minimize the risk of errors that can arise from using disparate methods. It also enabled the migration of different types of data following a similar approach regardless of the use case. By adopting this approach, data handling is consistent, more efficient, and more straightforward to manage across different projects and teams. In addition, although the use cases have autonomy in their domain from a governance perspective, on top of them sits a centralized governance model that defines the access control in the shared architectural components. Importantly, this implementation emphasizes data protection by enforcing encryption across all services, including Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Moreover, it adheres to the principle of least privilege, thereby enhancing overall system security and reducing potential vulnerabilities.

The following diagram describes the frameworks that were created. In this design, the workloads in the new data platform are divided by use case. Each use case requires the creation of a set of YAML files for each phase, from data migration to data flow orchestration, and these files are basically the input of the system. The output is a set of DAGs that run the specific tasks.

Overview

In the following sections, we discuss the objectives, implementation, and learnings of each phase in more detail.

Data orchestration

The objective of this phase is to build a metadata-driven framework to orchestrate the data flows along the whole modernization process. The orchestration framework provides a robust and scalable solution with the following capacities: dynamically create DAGs, integrate natively with non-AWS services, allow the creation of dependencies based on past executions, and add accessible metadata generation for each execution. Therefore, it was decided to use Amazon Managed Workflows for Apache Airflow (Amazon MWAA), which, through the Apache Airflow engine, provides these functionalities while abstracting users from the management operation.

The following is the description of the metadata files that are provided as part of the data orchestration phase for a given use case that performs the data processing using Spark on Amazon EMR Serverless:

owner: # Use case owner
dags: # List of DAGs to be created for this use case
  - name: # Use case name
    type: # Type of DAG (can be migration, ingestion, transformation or maintenance)
    tags: # List of tags
    notification: # Defines notifications for this DAG
      on_success_callback: true
      on_failure_callback: true
    spark: # Spark job information
      entrypoint: # Spark script
      arguments: # Arguments required by the Spark script
      spark_submit_parameters: # Spark submit parameters

The idea behind all the frameworks is to build reusable artifacts that enable the development teams to accelerate their work while providing reliability. In this case, the framework provides the capabilities to create DAG objects within Amazon MWAA based on configuration files (YAML files).

This particular framework is built on layers that add different functionalities to the final DAG:

  • DAGs – The DAGs are built based on the metadata information provided to the framework. The data engineers don't have to write Python code in order to create the DAGs; they are created automatically, and this module is in charge of performing this dynamic creation of DAGs (a minimal sketch follows this list).
  • Validations – This layer handles YAML file validation in order to prevent corrupted files from affecting the creation of other DAGs.
  • Dependencies – This layer handles dependencies among different DAGs in order to handle complex interconnections.
  • Notifications – This layer handles the type of notifications and alerts that are part of the workflows.

Orchestration

One aspect to consider when using Amazon MWAA is that, even as a managed service, it requires some maintenance from the users, and it's important to have a good understanding of the number of DAGs and processes that you're expected to have in order to fine-tune the instance and obtain the desired performance. Some of the parameters that were fine-tuned during the engagement were core.dagbag_import_timeout, core.dag_file_processor_timeout, core.min_serialized_dag_update_interval, core.min_serialized_dag_fetch_interval, scheduler.min_file_process_interval, scheduler.max_dagruns_to_create_per_loop, scheduler.processor_poll_interval, scheduler.dag_dir_list_interval, and celery.worker_autoscale.
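
These parameters are applied as Airflow configuration overrides on the MWAA environment. The following is a minimal sketch using boto3; the environment name and the values shown are illustrative, not the ones used in the engagement.

import boto3

# Illustrative overrides; tune them to your own DAG count and workload.
airflow_overrides = {
    "core.dagbag_import_timeout": "120",
    "scheduler.min_file_process_interval": "60",
    "scheduler.dag_dir_list_interval": "120",
    "celery.worker_autoscale": "10,5",
}

mwaa = boto3.client("mwaa")
mwaa.update_environment(
    Name="data-platform-orchestration",  # hypothetical environment name
    AirflowConfigurationOptions=airflow_overrides,
)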

One of the layers described in the preceding diagram corresponds to validation. This was an important component for the creation of dynamic DAGs. Because the input to the framework consists of YAML files, it was decided to filter out corrupted files before attempting to create the DAG objects. Following this approach, Jumia could avoid undesired interruptions of the whole process. The module that actually builds DAGs only receives configuration files that follow the required specifications to successfully create them. In case of corrupted files, information regarding the specific issues is logged into Amazon CloudWatch so that developers can fix them.
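
As an illustration of this validation layer, the following is a minimal sketch that keeps only well-formed DAG definitions; the required keys follow the specification shown earlier, while the function name and log wiring are hypothetical.

import logging

import yaml

logger = logging.getLogger(__name__)  # surfaced in Amazon CloudWatch through the MWAA logging configuration

# Keys every DAG definition must provide, based on the specification shown earlier.
REQUIRED_DAG_KEYS = {"name", "type", "notification", "spark"}

def load_valid_dag_configs(yaml_path: str) -> list:
    """Return only the DAG definitions that pass validation; log and skip the rest."""
    try:
        with open(yaml_path) as f:
            config = yaml.safe_load(f)
    except yaml.YAMLError as exc:
        logger.error("Skipping corrupted file %s: %s", yaml_path, exc)
        return []

    valid = []
    for dag_def in (config or {}).get("dags", []):
        missing = REQUIRED_DAG_KEYS - dag_def.keys()
        if missing:
            logger.error("Skipping DAG in %s, missing keys: %s", yaml_path, sorted(missing))
            continue
        valid.append(dag_def)
    return valid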

Data migration

The objective of this phase is to build a metadata-driven framework for migrating data from HDFS to Amazon S3 with the Apache Iceberg storage format, which involves the least operational overhead, provides scalability during peak hours, and ensures data integrity and confidentiality.

The following diagram illustrates the architecture.

Migration

During this phase, a metadata-driven framework built in PySpark receives a configuration file as input so that the migration tasks can run in an Amazon EMR Serverless job. This job uses the PySpark framework as the script location. Then the orchestration framework described previously is used to create a migration DAG that runs the following tasks:

  1. The first task creates the DDLs in Iceberg format in the AWS Glue Data Catalog using the migration framework inside an Amazon EMR Serverless job.
  2. After the tables are created, the second task transfers HDFS data to a landing bucket in Amazon S3 using AWS DataSync to sync customer data. This process brings data from all the different layers of the data lake.
  3. When this process is complete, a third task converts data to Iceberg format from the landing bucket to the destination bucket (raw, process, or analytics), again using another option of the migration framework embedded in an Amazon EMR Serverless job, as sketched after this list.
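
As a rough illustration of that third task, the following sketch converts files from the landing bucket into an Iceberg table registered in the AWS Glue Data Catalog; the catalog configuration, bucket names, table name, and the Parquet source format are assumptions for illustration, not the project's actual code.

from pyspark.sql import SparkSession

# Iceberg catalog backed by the AWS Glue Data Catalog; names and paths are illustrative.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://jumia-raw-bucket/warehouse/")
    .getOrCreate()
)

# Read the files that AWS DataSync synced from HDFS into the landing bucket.
landing_df = spark.read.parquet("s3://jumia-landing-bucket/sales/orders/")

# Append into the Iceberg table whose DDL was created by the first task.
landing_df.writeTo("glue.raw.orders").append()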

Data transfer performance is better when the size of the files to be transferred is around 128–256 MB, so it's recommended to compact the files at the source. By reducing the number of files, the metadata analysis and integrity phases are shortened, speeding up the migration phase.

Data ingestion

The objective of this phase is to implement another metadata-based framework that responds to the two data ingestion models. A batch mode is responsible for extracting data from different data sources (such as Oracle or PostgreSQL), and a micro-batch-based mode extracts data from a Kafka cluster that, based on configuration parameters, has the capacity to run native streams in streaming mode.

The following diagram illustrates the architecture for the batch and micro-batch and streaming approach.

Ingestion

During this phase, a metadata-driven framework builds the logic to bring data from Kafka, databases, or external services, which is run using an ingestion DAG deployed in Amazon MWAA.

Spark Structured Streaming was used to ingest data from Kafka topics. The framework receives configuration files in YAML format that indicate which topics to read, what extraction processes should be performed, whether the data should be read in streaming or micro-batch mode, and in which destination table the information should be stored, among other configurations.
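
The following is a minimal sketch of that pattern, assuming Spark 3.3 or later with an Iceberg sink; the topic, broker, table, and checkpoint values stand in for what the framework reads from the YAML configuration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stand-ins for values the framework reads from the YAML configuration.
topic = "orders-events"
destination_table = "glue.raw.orders_events"
run_as_stream = False  # False runs a micro-batch: process the available data, then stop

reader = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", topic)
)

writer = (
    reader.load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")
    .writeStream
    .format("iceberg")
    .option("checkpointLocation", f"s3://jumia-checkpoints/{topic}/")
)

if run_as_stream:
    query = writer.trigger(processingTime="1 minute").toTable(destination_table)
else:
    query = writer.trigger(availableNow=True).toTable(destination_table)

query.awaitTermination()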

For batch ingestion, a metadata-driven framework written in PySpark was implemented. In the same way as the previous one, the framework receives a configuration in YAML format with the tables to be migrated and their destination.

One of the aspects to consider in this type of migration is the synchronization of data between the ingestion phase and the migration phase, so that there is no loss of data and data is not reprocessed unnecessarily. To this end, a solution was implemented that saves the timestamps of the last historical data (per table) migrated in a DynamoDB table. Both types of frameworks are programmed to use this data the first time they are run. For micro-batching use cases, which use Spark Structured Streaming, Kafka data is read by assigning the value stored in DynamoDB to the startingTimestamp parameter. For all other executions, priority is given to the metadata in the checkpoint folder. This way, you can make sure ingestion is synchronized with the data migration.
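
A minimal sketch of that handoff is shown below; the DynamoDB table name, key schema, and attribute are hypothetical, and the timestamp is assumed to be stored in milliseconds since the epoch.

import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical watermark table written by the migration framework (one item per table).
dynamodb = boto3.resource("dynamodb")
watermarks = dynamodb.Table("ingestion_watermarks")
item = watermarks.get_item(Key={"table_name": "raw.orders"}).get("Item")

kafka_reader = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders-events")
)

if item:
    # First run: start reading from the last timestamp migrated from HDFS.
    # On later runs the offsets in the checkpoint folder take precedence over this option.
    kafka_reader = kafka_reader.option("startingTimestamp", str(item["last_migrated_ts"]))

df = kafka_reader.load()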

Data processing

The objective in this phase was to be able to handle updates and deletions of data in an object storage file system, so Iceberg was a key solution adopted throughout the project as the delta lake file format because of its ACID capabilities. Although all phases use Iceberg as delta files, the processing phase makes extensive use of Iceberg's capabilities to do incremental processing of data, creating the processing layer through UPSERTs using Iceberg's ability to run MERGE INTO commands.
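
The following is a minimal sketch of such an UPSERT, assuming a Spark session configured with the Iceberg SQL extensions and a Glue-backed catalog named glue; the table and key names are illustrative.

from pyspark.sql import SparkSession

# Assumes spark.sql.extensions includes org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
# and that a catalog named "glue" points to the AWS Glue Data Catalog.
spark = SparkSession.builder.getOrCreate()

# Incremental slice read from the raw layer; table and column names are illustrative.
spark.table("glue.raw.orders").createOrReplaceTempView("orders_updates")

# UPSERT into the processing layer using Iceberg's MERGE INTO support.
spark.sql("""
    MERGE INTO glue.process.orders AS target
    USING orders_updates AS source
    ON target.order_id = source.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")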

The following diagram illustrates the architecture.

Processing

The architecture is similar to the ingestion phase, with the data source simply changed to Amazon S3. This approach speeds up the delivery phase and maintains quality with a production-ready solution.

By default, Amazon EMR Serverless has the spark.dynamicAllocation.enabled parameter set to true. This option scales up or down the number of executors registered within the application, based on the workload. This brings a lot of advantages when dealing with different types of workloads, but it also brings considerations when using Iceberg tables. For instance, while writing data into an Iceberg table, the Amazon EMR Serverless application can use a large number of executors in order to speed up the task. This can result in reaching Amazon S3 limits, specifically the number of requests per second per prefix. For this reason, it's important to apply good data partitioning practices.
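
One possible mitigation, separate from partitioning and not necessarily the approach taken in this project, is to bound how far dynamic allocation can scale the writing job; the following is a minimal sketch with an illustrative cap.

from pyspark.sql import SparkSession

# Keep dynamic allocation on, but bound the executor count so concurrent S3 writes
# stay below the per-prefix request limits; the cap below is illustrative.
spark = (
    SparkSession.builder
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "50")
    .getOrCreate()
)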

Another important aspect to consider in these cases is the object storage file layout. By default, Iceberg uses the Hive storage layout, but it can be set to use ObjectStoreLocationProvider. With this property set, a deterministic hash is generated for each file, and the hash is appended directly after write.data.path. This can greatly minimize throttling based on object prefix, as well as maximize throughput for Amazon S3 related I/O operations, because the files written are equally distributed across multiple prefixes.
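
A minimal sketch of enabling this through Iceberg table properties is shown below; the table name and data path are illustrative.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Spread data files across hashed S3 prefixes to reduce per-prefix throttling.
spark.sql("""
    ALTER TABLE glue.process.orders SET TBLPROPERTIES (
        'write.object-storage.enabled' = 'true',
        'write.data.path' = 's3://jumia-process-bucket/orders-data'
    )
""")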

Data maintenance

When working with data lake table formats such as Iceberg, it's essential to engage in routine maintenance tasks to optimize table metadata file management, preventing a large number of unnecessary files from accumulating and promptly removing any unused files. The objective of this phase was to build another framework that can perform these types of tasks on the tables within the data lake.

The following diagram illustrates the architecture.

Maintenance

The framework, like the other ones, receives a configuration file (YAML files) indicating the tables and the list of maintenance tasks with their respective parameters. It was built on PySpark so that it could run as an Amazon EMR Serverless job and could be orchestrated using the orchestration framework, just like the other frameworks built as part of this solution.

The following maintenance tasks are supported by the framework (a sketch of how some of them map to Iceberg procedures follows the list):

  • Expire snapshots – Snapshots can be used for rollback operations as well as time travel queries. However, they can accumulate over time and lead to performance degradation. It's highly recommended to regularly expire snapshots that are no longer needed.
  • Remove old metadata files – Metadata files can accumulate over time just like snapshots. Removing them regularly is also recommended, especially when dealing with streaming or micro-batching operations, which was one of the cases of the overall solution.
  • Compact files – As the number of data files increases, the amount of metadata stored in the manifest files also increases, and small data files can lead to less efficient queries. Because this solution uses a streaming and micro-batching application writing into Iceberg tables, the size of the files tends to be small. For this reason, a method to compact files was necessary to enhance the overall performance.
  • Hard delete data – One of the requirements was to be able to perform hard deletes in the data older than a certain period of time. This implies removing expiring snapshots and removing old metadata files.
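
As a rough illustration, several of these tasks can be expressed with Iceberg's Spark procedures, as in the following sketch; it assumes the Iceberg SQL extensions are enabled and a catalog named glue, and the table name and retention timestamp are illustrative.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Expire snapshots older than a given point in time.
spark.sql("""
    CALL glue.system.expire_snapshots(table => 'process.orders',
                                      older_than => TIMESTAMP '2024-01-01 00:00:00')
""")

# Remove files that are no longer referenced by any table metadata.
spark.sql("CALL glue.system.remove_orphan_files(table => 'process.orders')")

# Compact the small data files produced by streaming and micro-batch writes.
spark.sql("CALL glue.system.rewrite_data_files(table => 'process.orders')")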

The maintenance tasks were scheduled with different frequencies depending on the use case and the specific task. For this reason, the schedule information for these tasks is defined in each of the YAML files of the specific use case.

At the time this framework was implemented, there was no automated maintenance solution on top of Iceberg tables. At AWS re:Invent 2024, Amazon S3 Tables functionality was launched to automate the maintenance of Iceberg tables. This functionality automates file compaction, snapshot management, and unreferenced file removal.

Conclusion

Building a data platform on top of standardized frameworks that use metadata for different aspects of the data handling process, from data migration and ingestion to orchestration, enhances the visibility and control over each of the phases and significantly speeds up implementation and development processes. Furthermore, by using services such as Amazon EMR Serverless and DynamoDB, you can bring all the benefits of serverless architectures, including scalability, simplicity, flexible integration, improved reliability, and cost-efficiency.

With this architecture, Jumia was able to reduce their data lake cost by 50%. Furthermore, with this approach, data and DevOps teams were able to deploy complete infrastructures and data processing capabilities by creating metadata files along with Spark SQL files. This approach has reduced turnaround time to production and lowered failure rates. Additionally, AWS Lake Formation provided the capabilities to collaborate on and govern datasets on various storage layers on the AWS platform and externally.

Leveraging AWS for our data platform has not only optimized and reduced our infrastructure costs but also standardized our workflows and ways of working across data teams and established a more trustworthy single source of truth for our data assets. This transformation has boosted our efficiency and agility, enabling faster insights and enhancing the overall value of our data platform.

– Hélder Russa, Head of Data Engineering at Jumia Group

Take the first step towards streamlining the data migration process now, with AWS.


About the Authors

Ramón Díez is a Senior Customer Delivery Architect at Amazon Web Services. He led the project with the firm conviction of putting technology in service of the business.

Paula Marenco is a Data Architect at Amazon Web Services. She enjoys designing analytical solutions that bring light into complexity, turning intricate data processes into clear and actionable insights. Her work focuses on making data more accessible and impactful for decision-making.

Hélder Russa is the Head of Data Engineering at Jumia Group, contributing to the strategy definition, design, and implementation of multiple Jumia data platforms that support the overall decision-making process, as well as operational solutions, data science projects, and real-time analytics.

Pedro Gonçalves is a Principal Data Engineer at Jumia Group, responsible for designing and overseeing the data architecture, with an emphasis on the AWS platform and data lakehouse technologies to ensure robust and agile data solutions and analytics capabilities.
