At Dimona, a leading Latin American apparel company founded 55 years ago in Brazil, our business is t-shirts. We design them, manufacture them, and sell them to consumers online and through our five retail stores in Rio de Janeiro. We also provide B2B services for companies and their customers in Brazil and the US.

We've come a long way since 2011, when I joined Dimona to launch our first website. Today, our API enables our B2B customers to upload custom designs and automatically route orders from their e-commerce sites to us. We then make the shirts on demand and ship them in as little as 24 hours.
Both APIs and fast-turnaround drop shipping were major innovations for the Latin American apparel industry, and they enabled us to grow very quickly. Today, we have more than 80,000 B2B customers supplied by our factories in Rio de Janeiro and South Florida. We can dropship on behalf of our B2B customers anywhere in Brazil and the U.S. and help them avoid the hassle and cost of import taxes.
Our business is thriving. However, we almost didn't get here due to growing pains with our information technology.

Off-the-Shelf ERP Systems Too Limited
Due to our vertically integrated business model, our supply chain is longer than that of most clothing makers. We need to track raw fabric as it arrives at our factories, the t-shirts as they move through the cutting, sewing and printing stages, and the finished products as they travel from factory to warehouse to retail store or mail carrier before finally reaching customers.
Not only is our supply chain longer than usual, so is the size and variety of our inventory. We have up to a million t-shirts in stock depending on the season. And because of the many custom designs, colors, materials and sizes that we offer, the number of unique items is also higher than at other apparel makers.
We tried many off-the-shelf ERP systems to manage our inventory end-to-end, but nothing proved up to the task. In particular, limitations in these systems meant we could only store end-of-day inventory counts by location, rather than a full record of each individual item as it traveled through our supply chain.
Tracking only inventory counts minimized the amount of data we had to store. However, it also meant that when we tried to match these counts with the inventory movements we did have on file, mysterious errors emerged that we couldn't reconcile. That made it hard for us to trust our own inventory data.

MySQL Crumbles Under Analytic Load
In 2019, we deployed our own custom-built inventory management system in our main warehouse in Rio de Janeiro. Having had experience with AWS, we built our inventory management system around Amazon Aurora, AWS's version of MySQL-as-a-service. Rather than just record end-of-day inventory totals, we recorded every inventory movement using three pieces of data: the item ID, its location ID, and the quantity of that item at that location.
In other words, we created a ledger that tracked every t-shirt as it moved from raw fabric to finished goods into the hands of a customer. Every single barcode scan was recorded, whether it was a pallet of t-shirts shipped from the warehouse to a store, or a single shirt moved from one store shelf to another.
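The ledger idea is simple enough to sketch in a few lines of Python. This is a minimal, hypothetical illustration (the function and field names are ours, not Dimona's actual schema): every barcode scan appends an immutable movement record, and current stock at any location is derived by summing the movements rather than stored as a separate count.

```python
from collections import defaultdict

# Each scan appends one immutable movement record:
# (item_id, location_id, quantity). A negative quantity
# represents stock leaving a location.
ledger = []

def record_movement(item_id, from_location, to_location, quantity):
    """Record a transfer as two ledger entries, one per location."""
    if from_location is not None:
        ledger.append((item_id, from_location, -quantity))
    ledger.append((item_id, to_location, quantity))

def stock_levels():
    """Derive current stock per (item, location) by summing the ledger."""
    totals = defaultdict(int)
    for item_id, location_id, quantity in ledger:
        totals[(item_id, location_id)] += quantity
    return totals

# A pallet of 100 shirts arrives, then ships to a store...
record_movement("tshirt-blue-M", None, "warehouse-rio", 100)
record_movement("tshirt-blue-M", "warehouse-rio", "store-ipanema", 100)
# ...and a single shirt moves from one store to another.
record_movement("tshirt-blue-M", "store-ipanema", "store-leblon", 1)

levels = stock_levels()
print(levels[("tshirt-blue-M", "store-ipanema")])  # 99
print(levels[("tshirt-blue-M", "store-leblon")])   # 1
```

Because totals are always computed from the movements themselves, the counts can never drift out of sync with the movement history, which is exactly the reconciliation problem the old count-based systems suffered from.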
This created an explosion in the volume of data we were gathering in real time. Suddenly, we were uploading 300,000 transactions to Aurora every two weeks. But it also enabled us to query our data to discover the exact location of a particular t-shirt at any given time, as well as view high-level inventory totals and trends.
At first, Aurora was able to handle the task of both storing and aggregating the data. But as we brought more warehouses and stores online, the database started bogging down on the analytics side. Queries that used to take tens of seconds started taking more than a minute or timing out altogether. After a reboot, the system would be fine for a short while before becoming slow and unresponsive again.

Pandemic-Led Expansion
Compounding the challenge was COVID-19's arrival in early 2020. Suddenly we had many international customers clamoring for the same drop shipment services we provided in Brazil, but in other markets. In mid-2020, I moved to Florida and opened our U.S. factory and warehouse.
By that point, our inventory management system had slowed down to the point of being unusable. And our bills from doing even simple aggregations in Aurora were through the roof.
We were faced with several options. Going back to an error-ridden inventory-count system was out of the question. Another option was to continue recording all inventory movements but use them only to double-check our separately tracked inventory counts, rather than generating our inventory totals from the movement records themselves. That would avoid overtaxing the Aurora database's meager analytical capabilities. But it would force us to maintain two separate datasets, datasets that would have to be constantly compared against each other with no guarantee of improved accuracy.
We needed a better technology solution, one that could store large data sets and query them in fast, automated ways, as well as make quick, simple data aggregations. And we needed it soon.

Finding Our Solution
I looked at several disparate options. I considered a blockchain-based system for our ledger before quickly dismissing it. Within AWS, I looked at DynamoDB as well as another ledger database offered by Amazon. We couldn't get DynamoDB to ingest our data, while the ledger database was too raw and would have required too much DIY effort to make work. I also looked at Elasticsearch, and came to the same conclusion: too much custom engineering effort to deploy.
I learned about Rockset from a company that also was looking to replace query-challenged Aurora with a faster managed cloud alternative.
It took us just two months to test and validate Rockset before deploying it in September 2021. We continued to ingest all of our inventory transactions into Aurora. But using Amazon's Database Migration Service (DMS), we now continuously replicate data from Aurora into Rockset, which does all of the data processing, aggregations and calculations.
“Where Rockset really shines is its ability to deliver precise, accurate views of our inventory in near-real time.”
– Igor Blumberg, CTO, Dimona
This connection was extremely easy to set up thanks to Rockset's integration with MySQL. And it's fast: DMS replicates updates from a million-plus Aurora documents to Rockset every minute, and they become available to users instantly.
Where Rockset really shines is its ability to deliver precise, accurate views of our inventory in near-real time. We use Rockset's Query Lambda capability to pre-create named, parameterized SQL queries that can be executed from a REST endpoint. This avoids having to use application code to execute SQL queries, which makes performance easier to manage and monitor, and is also more secure.
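To make the Query Lambda pattern concrete, here is a hedged Python sketch of how a saved query might be invoked over REST. The workspace, lambda name, region, and parameter names are all hypothetical; only the general endpoint shape follows Rockset's documented Query Lambda API. The function builds the request without sending it, so the wiring is visible without credentials.

```python
import json
import urllib.request

# Hypothetical names for illustration only; check Rockset's docs
# for your actual region, workspace, and Query Lambda name.
API_SERVER = "https://api.usw2a1.rockset.com"
WORKSPACE = "inventory"
LAMBDA_NAME = "stock_by_location"

def build_query_lambda_request(api_key, item_id):
    """Build (but do not send) a request that executes the saved query."""
    url = (f"{API_SERVER}/v1/orgs/self/ws/{WORKSPACE}"
           f"/lambdas/{LAMBDA_NAME}/tags/latest")
    body = {
        "parameters": [
            {"name": "item_id", "type": "string", "value": item_id}
        ]
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"ApiKey {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_query_lambda_request("YOUR_API_KEY", "tshirt-blue-M")
# urllib.request.urlopen(req) would run the pre-created SQL on the
# server and return the result rows as JSON.
print(req.full_url)
```

Because the SQL lives server-side under a name and version tag, application code only ever passes parameters, which is what keeps the queries easy to monitor and secure.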
Using Rockset's Query Lambdas and APIs also shrank the amount of data we needed to process. This accelerates the speed at which we can deliver answers to customers browsing our website, and to store employees and corporate staff internally searching our inventory management system. Rockset also completely eliminated database timeouts.

Rockset also gives us full confidence in the ongoing accuracy of our inventory management system without having to constantly double-check against daily inventory counts. And it allows us to track our supply chain in real time and predict potential spikes in demand and shortages.
Rockset has been in production for us for more than half a year. Though we aren't yet leveraging Rockset's capabilities in complex analytics or deep data explorations, we're more than pleased with the near real-time, highly accurate views of our inventory we have now, something that MySQL couldn't deliver.
In the future we're thinking of monitoring DMS to guard against hiccups or replication errors, though there have been none so far. We're also considering using Rockset's APIs to create objects as we ingest inventory transactions.
Rockset has had a huge effect on our business. Its speed and accuracy give us unprecedented visibility into our inventory and supply chain, which is mission critical for us.
Rockset helped us thrive during Black Friday and Christmas 2021. For the first time, I was able to get some sleep during the holiday season!
“Rockset gives us full confidence in the ongoing accuracy of our inventory management system without having to constantly double-check against daily inventory counts. And it allows us to track our supply chain in real time and predict potential spikes in demand and shortages.”
– Igor Blumberg, CTO, Dimona

Rockset is the real-time analytics database in the cloud for modern data teams. Get faster analytics on fresher data, at lower costs, by exploiting indexing over brute-force scanning.