In our earlier post Backtesting index rebalancing arbitrage with Amazon EMR and Apache Iceberg, we showed how to use Apache Iceberg in the context of strategy backtesting. In this post, we focus on data management implementation options such as accessing data directly in Amazon Simple Storage Service (Amazon S3), using popular data formats like Parquet, or using open table formats like Iceberg. Our experiments are based on real-world historical full order book data, provided by our partner CryptoStruct, and compare the trade-offs between these choices, focusing on performance, cost, and quant developer productivity.
Data management is the foundation of quantitative research. Quant researchers spend roughly 80% of their time on necessary but not impactful data management tasks such as data ingestion, validation, correction, and reformatting. Traditional data management choices include relational, SQL, NoSQL, and specialized time series databases. In recent years, advances in parallel computing in the cloud have made object stores like Amazon S3 and columnar file formats like Parquet a preferred choice.
This post explores how Iceberg can enhance quant research platforms by improving query performance, reducing costs, and increasing productivity, ultimately enabling faster and more efficient strategy development in quantitative finance. Our analysis shows that Iceberg can accelerate query performance by up to 52%, reduce operational costs, and significantly improve data management at scale.
Having chosen Amazon S3 as our storage layer, a key decision is whether to access Parquet files directly or use an open table format like Iceberg. Iceberg offers distinct advantages through its metadata layer over Parquet, such as improved data management, performance optimization, and integration with various query engines.
In this post, we use the term vanilla Parquet to refer to Parquet files stored directly in Amazon S3 and accessed through standard query engines like Apache Spark, without the additional features provided by table formats such as Iceberg.
Quant developer and researcher productivity
In this section, we focus on the productivity features offered by Iceberg and how it compares to directly reading files in Amazon S3. As mentioned earlier, 80% of quantitative research work is attributed to data management tasks. Business impact heavily depends on quality data ("garbage in, garbage out"). Quants and platform teams need to ingest data from multiple sources with different velocities and update frequencies, and then validate and correct the data. These activities translate into the ability to run append, insert, update, and delete operations. For simple append operations, both Parquet on Amazon S3 and Iceberg offer similar convenience and productivity. However, real-world data is never perfect and needs to be corrected. Gap filling (inserts), error corrections and restatements (updates), and removing duplicates (deletes) are the most obvious examples. When writing data in the Parquet format directly to Amazon S3 without using an open table format like Iceberg, you have to write code to identify the affected partition, correct the errors, and rewrite the partition. Moreover, if the write job fails or a downstream read job occurs during this write operation, all downstream jobs have the potential of reading inconsistent data. However, Iceberg has built-in insert, update, and delete features with ACID (Atomicity, Consistency, Isolation, Durability) properties, and the framework itself manages the Amazon S3 mechanics on your behalf.
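As a concrete illustration, the following is a minimal PySpark sketch of such a correction workflow using Iceberg's SQL extensions. The Glue catalog, table, path, and column names (including the price column) are illustrative assumptions, not the exact schema from our experiments.

```python
from pyspark.sql import SparkSession

# Assumes the session is configured with an Iceberg catalog named "glue_catalog".
spark = SparkSession.builder.appName("iceberg-corrections").getOrCreate()

# Assume the corrected rows are restated records read from an upstream source.
corrected_rows_df = spark.read.parquet("s3://example-bucket/corrections/")
corrected_rows_df.createOrReplaceTempView("corrections")

# Restatement: upsert the corrected rows in a single atomic commit.
spark.sql("""
    MERGE INTO glue_catalog.quant_research.order_book t
    USING corrections c
    ON  t.exchange_code = c.exchange_code
    AND t.instrument = c.instrument
    AND t.adapterTimestamp_ts_utc = c.adapterTimestamp_ts_utc
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Removing bad records is a plain DELETE; Iceberg rewrites only the affected
# files, and concurrent readers never see a partially written state.
spark.sql("""
    DELETE FROM glue_catalog.quant_research.order_book
    WHERE instrument = 'BTC-USD' AND price <= 0
""")
```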
Guarding against lookahead bias is a critical capability of any quant research platform: what backtests as a profitable trading strategy can render itself useless and unprofitable in real time. Iceberg provides time travel and snapshotting capabilities out of the box to address lookahead bias that could be embedded in the data (such as delayed data delivery).
Simplified data corrections and updates
Iceberg enhances data management for quants in capital markets through its robust insert, delete, and update capabilities. These features allow efficient data corrections, gap filling in time series, and historical data updates without disrupting ongoing analyses or compromising data integrity.
Unlike direct Amazon S3 access, Iceberg supports these operations on petabyte-scale data lakes without requiring complex custom code. This simplifies data modification processes, which is crucial for ingesting and updating large volumes of market and trade data, quickly iterating on backtesting and reprocessing workflows, and maintaining detailed audit trails for risk and compliance requirements.
Iceberg's table format separates data files from metadata files, enabling efficient data modifications without full dataset rewrites. This approach also reduces expensive ListObjects API calls typically needed when directly accessing Parquet files in Amazon S3.
Additionally, Iceberg offers merge on read (MoR) and copy on write (CoW) approaches, providing flexibility for different quant research needs. MoR enables faster writes, suitable for frequently updated datasets, and CoW provides faster reads, beneficial for read-heavy workflows like backtesting.
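The mode can be chosen per table and per operation through standard Iceberg table properties, as in the following sketch (the table name is an assumption):

```python
# Favor merge-on-read for a frequently corrected dataset: deletes and updates
# are written as delete files and reconciled at read time, making writes cheaper.
spark.sql("""
    ALTER TABLE glue_catalog.quant_research.order_book SET TBLPROPERTIES (
        'write.delete.mode' = 'merge-on-read',
        'write.update.mode' = 'merge-on-read',
        'write.merge.mode'  = 'merge-on-read'
    )
""")
# A read-heavy backtesting table could instead keep the copy-on-write default,
# which rewrites affected data files eagerly so reads stay fast.
```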
For example, when a new data source or attribute is added, quant researchers can seamlessly incorporate it into their Iceberg tables and then reprocess historical data, confident they are using correct, time-appropriate information. This capability is particularly valuable in maintaining the integrity of backtests and the reliability of trading strategies.
In scenarios involving large-scale data corrections or updates, such as adjusting for stock splits or dividend payments across historical data, Iceberg's efficient update mechanisms significantly reduce processing time and resource usage compared to traditional methods.
These features collectively improve productivity and data management efficiency in quant research environments, allowing researchers to focus more on strategy development and less on data handling complexities.
Historical data access for backtesting and validation
Iceberg's time travel feature enables quant developers and researchers to access and analyze historical snapshots of their data. This capability is helpful while performing tasks like backtesting, model validation, and understanding data lineage.
Iceberg simplifies time travel workflows on Amazon S3 by introducing a metadata layer that tracks the history of changes made to the table. You can refer to this metadata layer to build a mental model of how Iceberg's time travel capability works.
Iceberg's time travel capability is driven by a concept called snapshots, which are recorded in metadata files. These metadata files act as a central repository that stores table metadata, including the history of snapshots. Additionally, Iceberg uses manifest files to provide a representation of data files, their partitions, and any associated delete files. These manifest files are referenced in the metadata snapshots, allowing Iceberg to identify the relevant data for a specific point in time.
When a user requests a time travel query, the typical workflow involves querying a specific snapshot. Iceberg uses the snapshot identifier to locate the corresponding metadata snapshot in the metadata files. The time travel capability is invaluable to quants, enabling them to backtest and validate strategies against historical data, reproduce and debug issues, perform what-if analysis, comply with regulations by maintaining audit trails and reproducing past states, and roll back and recover from data corruption or errors. Quants can also gain deeper insights into current market trends and correlate them with historical patterns. Also, the time travel feature can further mitigate any risks of lookahead bias. Researchers can access the exact data snapshots that were present in the past, and then run their models and strategies against this historical data, without the risk of inadvertently incorporating future information.
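For example, a backtest can be pinned to the table state as of a given date or to an exact snapshot ID. The following is a sketch using Spark SQL time travel; the table name, timestamp, and snapshot ID are illustrative assumptions:

```python
# Query the table exactly as it existed at a point in time; writes committed
# after that timestamp are not visible to the backtest.
spark.sql("""
    SELECT *
    FROM glue_catalog.quant_research.order_book TIMESTAMP AS OF '2024-06-30 00:00:00'
    WHERE instrument = 'BTC-USD'
""")

# Alternatively, pin to a specific snapshot ID recorded in the table metadata
# so the run can be reproduced exactly later.
spark.sql("""
    SELECT *
    FROM glue_catalog.quant_research.order_book VERSION AS OF 4218512536516645000
""")
```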
Seamless integration with familiar tools
Iceberg provides a variety of interfaces that enable seamless integration with the open source tools and AWS services that quant developers and researchers are familiar with.
Iceberg provides a comprehensive SQL interface that allows quant teams to interact with their data using familiar SQL syntax. This SQL interface is compatible with popular query engines and data processing frameworks, such as Spark, Trino, Amazon Athena, and Hive. Quant developers and researchers can use their existing SQL knowledge and tools to query, filter, aggregate, and analyze their data stored in Iceberg tables.
In addition to the primary SQL interface, Iceberg also provides the DataFrame API, which allows quant teams to programmatically interact with their data using popular distributed data processing frameworks like Spark and Flink as well as thin clients like PyIceberg. Quants can further use this API to build more programmatic approaches to access and manipulate data, allowing for the implementation of custom logic and integration of Iceberg with other AWS services like Amazon EMR.
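As a sketch of the thin-client path, PyIceberg can scan an Iceberg table without a Spark cluster; the catalog, table, and filter values below are illustrative assumptions:

```python
from pyiceberg.catalog import load_catalog

# Assumes a catalog named "glue" is configured (for example, in ~/.pyiceberg.yaml)
# and points at the AWS Glue Data Catalog.
catalog = load_catalog("glue")
table = catalog.load_table("quant_research.order_book")

# Read a filtered, column-pruned slice of the table into pandas for research.
df = (
    table.scan(
        row_filter="exchange_code == 'XNAS'",
        selected_fields=("exchange_code", "instrument", "adapterTimestamp_ts_utc"),
    )
    .to_pandas()
)
print(df.head())
```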
Although accessing data from Amazon S3 is a viable option, Iceberg provides several advantages, such as metadata management, performance optimization using partition pruning, data manipulation, and rich AWS ecosystem integration including services like Athena and Amazon EMR, with a more seamless and feature-rich data processing experience.
Undifferentiated heavy lifting
Data partitioning is one of the major contributing factors to optimizing aggregate throughput to and from Amazon S3, contributing to overall High Performance Computing (HPC) environment price-performance.
Quant researchers often face performance bottlenecks and complex data management challenges when dealing with large-scale datasets in Amazon S3. As discussed in Best practices design patterns: optimizing Amazon S3 performance, single prefix performance is limited to 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix. Iceberg's metadata layer and intelligent partitioning strategies automatically optimize data access patterns, reducing the likelihood of I/O throttling and minimizing the need for manual performance tuning. This automation allows quant teams to focus on developing and refining trading strategies rather than troubleshooting data access issues or optimizing storage layouts.
In this section, we discuss situations we discovered while running our experiments at scale and the solutions provided by Iceberg vs. vanilla Parquet when accessing data in Amazon S3.
As we mentioned in the introduction, the nature of quant research is to "fail fast": new ideas need to be quickly evaluated and then either prioritized for a deep dive or dismissed. This makes it impossible to come up with universal partitioning that works all the time and for all research styles.
When accessing data directly as Parquet files in Amazon S3, without using an open table format like Iceberg, partitioning and throttling issues can arise. Partitioning in this case is determined by the physical layout of files in Amazon S3, and a mismatch between the intended partitioning and the actual file layout can lead to I/O throttling exceptions. Additionally, listing directories in Amazon S3 can also result in throttling exceptions due to the high number of API calls required.
In contrast, Iceberg provides a metadata layer that abstracts away the physical file layout in Amazon S3. Partitioning is defined at the table level, and Iceberg handles the mapping between logical partitions and the underlying file structure. This abstraction helps mitigate partitioning issues and reduces the likelihood of I/O throttling exceptions. Additionally, Iceberg's metadata caching mechanism minimizes the number of List API calls required, addressing the directory listing throttling issue.
Although both approaches involve direct access to Amazon S3, Iceberg is an open table format that introduces a metadata layer, providing better partitioning management and reducing the risk of throttling exceptions. It doesn't act as a database itself, but rather as a data format and processing engine on top of the underlying storage (in this case, Amazon S3).
One of the most effective methods to address Amazon S3 API quota limits is salting (random hash prefixes), a technique that adds random partition IDs to Amazon S3 paths. This increases the probability of prefixes residing on different physical partitions, helping distribute API requests more evenly. Iceberg supports this functionality out of the box for both data ingestion and reading.
Implementing salting directly in Amazon S3 requires complex custom code to create and use partitioning schemes with random keys in the naming hierarchy. This approach necessitates a custom data catalog and metadata system to map physical paths to logical paths, allowing direct partition access without relying on Amazon S3 List API calls. Without such a system, applications risk exceeding Amazon S3 API quotas when accessing specific partitions.
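In Iceberg, a comparable effect can be switched on declaratively. The following sketch shows two options against an assumed table name: the object store file layout, which adds a hash component to S3 object paths, and a bucket partition transform on a high-cardinality column.

```python
# Spread S3 object paths across many prefixes by adding a hash component.
spark.sql("""
    ALTER TABLE glue_catalog.quant_research.order_book SET TBLPROPERTIES (
        'write.object-storage.enabled' = 'true'
    )
""")

# Bucketing on a high-cardinality column spreads both writes and reads.
spark.sql("""
    ALTER TABLE glue_catalog.quant_research.order_book
    ADD PARTITION FIELD bucket(16, instrument)
""")
```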
At petabyte scale, Iceberg's advantages become clear. It efficiently manages data through the following features:
- Listing caching
- Configurable partitioning strategies (range, bucket)
- Data management functionality (compaction)
- Catalog, metadata, and statistics usage for optimal execution plans
These built-in features eliminate the need for custom solutions to manage Amazon S3 API quotas and data organization at scale, reducing development time and maintenance costs while improving query performance and reliability.
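Compaction and snapshot maintenance, for instance, are exposed as built-in Spark procedures. The following is a sketch with an assumed catalog and table name:

```python
# Compact small files produced by frequent ingestion into larger ones
# (the 512 MB target size is chosen for illustration).
spark.sql("""
    CALL glue_catalog.system.rewrite_data_files(
        table => 'quant_research.order_book',
        options => map('target-file-size-bytes', '536870912')
    )
""")

# Expire old snapshots to keep metadata and S3 object counts bounded.
spark.sql("""
    CALL glue_catalog.system.expire_snapshots(
        table => 'quant_research.order_book',
        older_than => TIMESTAMP '2024-06-01 00:00:00'
    )
""")
```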
Performance
We highlighted much of the Iceberg functionality that eliminates undifferentiated heavy lifting and improves developer and quant productivity. What about performance?
This section evaluates whether Iceberg's metadata layer introduces overhead or delivers optimization for quantitative research use cases, comparing it with vanilla Parquet access on Amazon S3. We examine how these approaches impact common quant research queries and workflows.
The key question is whether Iceberg's metadata layer, designed to optimize vanilla Parquet access on Amazon S3, introduces overhead or delivers the intended optimization for quantitative research use cases. We then discuss overlapping optimization techniques, such as data distribution and sorting, and note that there is no magic one-size-fits-all partitioning and sorting scheme in the context of quant research. Our benchmarks show that Iceberg performs comparably to direct Amazon S3 access, with additional optimizations from its metadata and statistics usage, similar to database indexing.
Vanilla Parquet vs. Iceberg: Amazon S3 read performance
We created four different datasets: two using Iceberg and two with direct Amazon S3 Parquet access, each with sorted and unsorted write distributions. The goal of this exercise was to compare the performance of direct Amazon S3 Parquet access vs. the Iceberg open table format, taking into account the impact of write distribution patterns when running various queries commonly used in quantitative trading research.
Query 1
We first run a simple count query to get the total number of records in the table. This query helps understand the baseline performance for a straightforward operation. For example, if the table contains tick-level market data for various financial instruments, the count gives an idea of the total number of data points available for analysis.
The following is the code for vanilla Parquet:
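(A minimal sketch; the S3 path and an existing SparkSession named spark are assumptions.)

```python
# Read the raw Parquet files from S3 and count all records.
df = spark.read.parquet("s3://example-bucket/order-book-data/")
print(df.count())
```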
Query 2
Our second query is a grouping and counting query to find the number of records for each combination of exchange_code and instrument. This query is commonly used in quantitative trading research to analyze market liquidity and trading activity across different instruments and exchanges.
The following is the code for vanilla Parquet:
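(A minimal sketch under the same assumptions as Query 1.)

```python
# Count records per exchange/instrument combination from the raw Parquet files.
df = spark.read.parquet("s3://example-bucket/order-book-data/")
(
    df.groupBy("exchange_code", "instrument")
      .count()
      .orderBy("exchange_code", "instrument")
      .show()
)
```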
The following is the code for Iceberg:
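(A sketch with an assumed Glue catalog and table name.)

```python
# Same aggregation, but against the Iceberg table through the catalog.
spark.sql("""
    SELECT exchange_code, instrument, COUNT(*) AS record_count
    FROM glue_catalog.quant_research.order_book
    GROUP BY exchange_code, instrument
    ORDER BY exchange_code, instrument
""").show()
```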
Query 3
Next, we run a distinct query to retrieve the distinct combinations of year, month, and day from the adapterTimestamp_ts_utc column. In quantitative trading research, this query can be useful for understanding the time range covered by the dataset. Researchers can use this information to identify periods of interest for their analysis, such as specific market events, economic cycles, or seasonal patterns.
The following is the code for vanilla Parquet:
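(A minimal sketch under the same assumptions as before.)

```python
from pyspark.sql import functions as F

# Distinct year/month/day combinations covered by the dataset.
df = spark.read.parquet("s3://example-bucket/order-book-data/")
(
    df.select(
        F.year("adapterTimestamp_ts_utc").alias("year"),
        F.month("adapterTimestamp_ts_utc").alias("month"),
        F.dayofmonth("adapterTimestamp_ts_utc").alias("day"),
    )
    .distinct()
    .orderBy("year", "month", "day")
    .show()
)
```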
The following is the code for Iceberg:
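(A sketch with an assumed Glue catalog and table name.)

```python
# Same distinct-dates query against the Iceberg table.
spark.sql("""
    SELECT DISTINCT
        YEAR(adapterTimestamp_ts_utc) AS year,
        MONTH(adapterTimestamp_ts_utc) AS month,
        DAY(adapterTimestamp_ts_utc) AS day
    FROM glue_catalog.quant_research.order_book
    ORDER BY year, month, day
""").show()
```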
Query 4
Finally, we run a grouping and counting query with a date range filter on the adapterTimestamp_ts_utc column. This query is similar to Query 2 but focuses on a specific time period. You can use this query to analyze market activity or liquidity during specific time periods, such as periods of high volatility, market crashes, or economic events. Researchers can use this information to identify potential trading opportunities or study the impact of these events on market dynamics.
The following is the code for vanilla Parquet:
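(A minimal sketch; the date range is illustrative.)

```python
from pyspark.sql import functions as F

# Count records per exchange/instrument within an illustrative date range.
df = spark.read.parquet("s3://example-bucket/order-book-data/")
(
    df.filter(
        (F.col("adapterTimestamp_ts_utc") >= "2023-01-01")
        & (F.col("adapterTimestamp_ts_utc") < "2023-02-01")
    )
    .groupBy("exchange_code", "instrument")
    .count()
    .orderBy("exchange_code", "instrument")
    .show()
)
```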
The following is the code for Iceberg. Because Iceberg has a metadata layer, the row count can be fetched from metadata:
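(A sketch under the same assumptions; the date range is illustrative.)

```python
# Iceberg can prune partitions and use column statistics from metadata,
# so far less data is scanned for the same filtered aggregation.
spark.sql("""
    SELECT exchange_code, instrument, COUNT(*) AS record_count
    FROM glue_catalog.quant_research.order_book
    WHERE adapterTimestamp_ts_utc >= TIMESTAMP '2023-01-01 00:00:00'
      AND adapterTimestamp_ts_utc < TIMESTAMP '2023-02-01 00:00:00'
    GROUP BY exchange_code, instrument
    ORDER BY exchange_code, instrument
""").show()
```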
Test results
To evaluate the performance and cost benefits of using Iceberg for our quant research data lake, we created four different datasets: two with Iceberg tables and two with direct Amazon S3 Parquet access, using both sorted and unsorted write distributions. We first ran AWS Glue write jobs to create the Iceberg tables and then mirrored the same write processes for the Amazon S3 Parquet datasets. For the unsorted datasets, we partitioned the data by exchange_code and instrument, and for the sorted datasets, we added a sort key on the time column.
Next, we ran a series of queries commonly used in quantitative trading research, including simple count queries, grouping and counting, distinct value queries, and queries with date range filters. Our benchmarking process involved reading data from Amazon S3, performing various transformations and joins, and writing the processed data back to Amazon S3 as Parquet files.
By comparing runtimes and costs across different data formats and write distributions, we quantified the benefits of Iceberg's optimized data organization, metadata management, and efficient Amazon S3 data handling. The results showed that Iceberg not only enhanced query performance without introducing significant overhead, but also reduced the likelihood of task failures, reruns, and throttling issues, leading to more stable and predictable job execution, particularly with large datasets stored in Amazon S3.
AWS Glue write jobs
In the following table, we compare the performance and cost implications of using Iceberg vs. vanilla Parquet access on Amazon S3, taking into account the following use cases:
- Iceberg table (unsorted) – We created an Iceberg table partitioned by exchange_code and instrument. This means the data was physically partitioned in Amazon S3 based on the unique combinations of exchange_code and instrument values. Partitioning the data in this way can improve query performance, because Iceberg can prune out partitions that aren't relevant to a given query, reducing the amount of data that needs to be scanned. The data was not sorted on any column in this case, which is the default behavior.
- Vanilla Parquet (unsorted) – For this use case, we wrote the data directly as Parquet files to Amazon S3, without using Iceberg. We repartitioned the data by the exchange_code and instrument columns using standard hash partitioning before writing it out. Repartitioning was necessary to avoid potential throttling issues when reading the data later, because accessing data directly from Amazon S3 without intelligent partitioning can lead to too many requests hitting the same S3 prefix. Like the Iceberg table, the data was not sorted on any column in this case. To make the comparison fair, we used the exact repartition count that Iceberg uses.
- Iceberg table (sorted) – We created another Iceberg table, this time partitioned by exchange_code and instrument. Additionally, we sorted the data in this table on the adapterTimestamp_ts_utc column. Sorting the data can improve query performance for certain types of queries, such as those that involve range filters or ordered outputs. Iceberg automatically handles the sorting and partitioning of the data transparently to the user (see the sketch after this list).
- Vanilla Parquet (sorted) – For this use case, we again wrote the data directly as Parquet files to Amazon S3, without using Iceberg. We repartitioned the data by range on the exchange_code, instrument, and adapterTimestamp_ts_utc columns before writing it out using standard range partitioning with a 1996 partition count, because this was what Iceberg was using based on the Spark UI. Repartitioning on the time column (adapterTimestamp_ts_utc) was necessary to achieve a sorted write distribution, because Parquet files are sorted within each partition. This sorted write distribution can improve query performance for certain types of queries, similar to the sorted Iceberg table.
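The following is a sketch of how the sorted Iceberg table's partitioning and sort order can be declared; the catalog, table name, and price column are assumptions, and Iceberg then applies the write distribution on subsequent writes:

```python
# Declare partitioning and a table-level sort order once; later writes inherit it.
spark.sql("""
    CREATE TABLE glue_catalog.quant_research.order_book_sorted (
        exchange_code string,
        instrument string,
        price double,
        adapterTimestamp_ts_utc timestamp
    )
    USING iceberg
    PARTITIONED BY (exchange_code, instrument)
""")

spark.sql("""
    ALTER TABLE glue_catalog.quant_research.order_book_sorted
    WRITE ORDERED BY adapterTimestamp_ts_utc
""")
```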
Write Distribution Pattern | Iceberg Table (Unsorted) | Vanilla Parquet (Unsorted) | Iceberg Table (Sorted) | Vanilla Parquet (Sorted) |
DPU Hours | 899.46639 | 915.70222 | 1402 | 1365 |
Number of S3 Objects | 7444 | 7288 | 9283 | 9283 |
Size of S3 Parquet Objects | 567.7 GB | 629.8 GB | 525.6 GB | 627.1 GB |
Runtime | 1h 51m 40s | 1h 53m 29s | 2h 52m 7s | 2h 47m 36s |
AWS Glue read jobs
For the AWS Glue read jobs, we ran a series of queries commonly used in quantitative trading research, such as simple counts, grouping and counting, distinct value queries, and queries with date range filters. We compared the performance of these queries between the Iceberg tables and the vanilla Parquet files read from Amazon S3. In the following table, you can see two AWS Glue jobs that show the performance and cost implications of the access patterns described earlier.
Read Queries / Runtime in Seconds | Iceberg Table | Vanilla Parquet |
COUNT(1) on unsorted | 35.76s | 74.62s |
GROUP BY and ORDER BY on unsorted | 34.29s | 67.99s |
DISTINCT and SELECT on unsorted | 51.40s | 82.95s |
FILTER and GROUP BY and ORDER BY on unsorted | 25.84s | 49.05s |
COUNT(1) on sorted | 15.29s | 24.25s |
GROUP BY and ORDER BY on sorted | 15.88s | 28.73s |
DISTINCT and SELECT on sorted | 30.85s | 42.06s |
FILTER and GROUP BY and ORDER BY on sorted | 15.51s | 31.51s |
AWS Glue DPU hours | 45.98 | 67.97 |
Test results insights
These test results offered the following insights:
- Accelerated query performance – Iceberg improved read operations by up to 52% for unsorted data and 51% for sorted data. This speed boost enables quant researchers to analyze larger datasets and test trading strategies more rapidly. In quantitative finance, where speed is crucial, this performance gain allows teams to uncover market insights faster, potentially gaining a competitive edge.
- Reduced operational costs – For read-intensive workloads, Iceberg reduced DPU hours by 32.4% and achieved a 10–16% reduction in Amazon S3 storage. These efficiency gains translate to cost savings in data-intensive quant operations. With Iceberg, companies can run more comprehensive analyses within the same budget or reallocate resources to other high-value activities, optimizing their research capabilities.
- Enhanced data management and scalability – Iceberg showed comparable write performance for unsorted data (899.47 DPU hours vs. 915.70 for vanilla Parquet) and maintained consistent object counts across unsorted and sorted scenarios (7,444 and 9,283, respectively). This consistency leads to more reliable and predictable job execution. For quant teams dealing with large-scale datasets, this reduces time spent on troubleshooting data infrastructure issues and increases focus on developing trading strategies.
- Improved productivity – Iceberg outperformed vanilla Parquet access across various query types. Simple counts were 52.1% faster, grouping and ordering operations improved by 49.6%, and filtered queries were 47.3% faster for unsorted data. This performance enhancement boosts productivity in quant research workflows. It reduces query completion times, allowing quant developers and researchers to spend more time on model development and market analysis, leading to faster iteration on trading strategies.
Conclusion
Quant research platforms often avoid adopting new data management solutions like Iceberg, fearing performance penalties and increased costs. Our analysis disproves these concerns, demonstrating that Iceberg not only matches or improves performance compared to direct Amazon S3 access, but also provides substantial additional benefits.
Our tests show that Iceberg significantly accelerates query performance, with improvements of up to 52% for unsorted data and 51% for sorted data. This speed boost enables quant researchers to analyze larger datasets and test trading strategies more rapidly, potentially uncovering valuable market insights faster.
Iceberg streamlines data management tasks, allowing researchers to focus on strategy development. Its robust insert, update, and delete capabilities, combined with time travel features, enable easy management of complex datasets, improving backtest accuracy and facilitating rapid strategy iteration.
The platform's intelligent handling of partitioning and Amazon S3 API quota issues eliminates undifferentiated heavy lifting, freeing quant teams from low-level data engineering tasks. This automation redirects efforts to high-value activities such as model development and market analysis. Moreover, our tests show that for read-intensive workloads, Iceberg reduced DPU hours by 32.4% and achieved a 10–16% reduction in Amazon S3 storage, leading to significant cost savings.
Flexibility is a key advantage of Iceberg. Its various interfaces, including SQL, DataFrames, and programmatic APIs, integrate seamlessly with existing quant research workflows, accommodating diverse analysis needs and coding preferences.
By adopting Iceberg, quant research teams gain both performance improvements and powerful data management tools. This combination creates an environment where researchers can push analytical boundaries, maintain high data integrity standards, and focus on generating valuable insights. The improved productivity and reduced operational costs enable quant teams to allocate resources more effectively, ultimately leading to a more competitive edge in quantitative finance.
About the Authors
Guy Bachar is a Senior Solutions Architect at AWS based in New York. He specializes in assisting capital markets customers with their cloud transformation journeys. His expertise encompasses identity management, security, and unified communication.
Sercan Karaoglu is a Senior Solutions Architect, specialized in capital markets. He is a former data engineer and passionate about quantitative investment research.
Boris Litvin is a Principal Solutions Architect at AWS. His role is financial services industry innovation. Boris joined AWS from the industry, most recently Goldman Sachs, where he held a variety of quantitative roles across equity, FX, and interest rates, and was CEO and Founder of a quantitative trading FinTech startup.
Salim Tutuncu is a Senior Partner Solutions Architect Specialist on Data & AI, based in Dubai with a focus on EMEA. With a background in the technology sector that spans roles as a data engineer, data scientist, and machine learning engineer, Salim has built extensive expertise in navigating the complex landscape of data and artificial intelligence. His current role involves working closely with partners to develop long-term, profitable businesses using the AWS platform, particularly in data and AI use cases.
Alex Tarasov is a Senior Solutions Architect working with Fintech startup customers, helping them to design and run their data workloads on AWS. He is a former data engineer and is passionate about all things data and machine learning.
Jiwan Panjiker is a Solutions Architect at Amazon Web Services, based in the Greater New York City area. He works with AWS enterprise customers, helping them in their cloud journey to solve complex business problems by making effective use of AWS services. Outside of work, he likes spending time with his friends and family, going for long drives, and exploring local cuisine.