Manipulating Delta Lake tables on MinIO with Trino

Vasileios Anagnostopoulos
Mar 2, 2023 · 5 min read


An educational Delta Lakehouse

Introduction

Setting up Hadoop correctly from scratch can be a genuinely challenging exercise. People usually grab ready-made images or get access to big cloud deployments. The problem with these approaches is that with the former you usually get outdated versions, or you lose the deep insight that comes from faking it till you make it; with the latter you have to pay, or at least hand over your credit card for the rare case in which you over-consume. Neither approach is ideal, especially for educational purposes. Experimenting with or learning Apache Spark usually comes with the same requirement of having a Hadoop deployment somewhere. Spark can run on the deployment's YARN or as a completely independent cluster, but in all cases HDFS needs to be there. Fortunately, there is another option.

Trino and MinIO

With the disaggregated-storage movement, people have started moving away from unwieldy, monolithic Hadoop deployments. There are many benefits to using MinIO instead of Hadoop, outlined in this previous article; the interested reader is highly encouraged to have a look at that reference. Because this is educational material and we like to experiment, this time we will not use Spark. Instead we will use Trino for the computational part. Trino is an excellent option for running distributed computations over distributed file storage in the spirit of the Apache ecosystem. It skips the custom computational layer of libraries and bespoke code entirely, and instead directly provides an ANSI SQL interface as a low-code solution. It is also very good at running ad-hoc queries. In this respect, a Trino-MinIO combination can serve as an alternative to BigQuery, for example.

Tweaking the state of the art

It turns out that we were not the first to examine this approach. I used this excellent post (called the Iceberg post from now on) to create a setup that is a prerequisite for what follows; if you have not read it, please do. It uses a ready-made image for the Hive Metastore, a standard MinIO image and a Trino coordinator. Instead, I went to the image's source code and took the parts I needed, and I opted for MariaDB as the transactional part of the metastore. Since this is an educational exercise that needs to run with the latest versions, a custom Docker image allowed me to do exactly that. For MinIO I used that image; people can of course use the image distributed by the MinIO project. I also avoided the mc utility, since I did not need it here. Putting it all together gives the docker-compose file.
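The authoritative file is in the repository mentioned below; as a rough sketch of its shape, here is what such a docker-compose.yml can look like. The service names, image tags, credentials and paths are illustrative assumptions on my part, not necessarily what the repository uses.

version: "3.9"
services:
  mariadb:
    image: mariadb:10.11              # backing database for the Hive Metastore
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: metastore_db
  hive-metastore:
    build: ./hive-metastore           # custom image built from the metastore sources
    ports:
      - "9083:9083"                   # Thrift endpoint that Trino talks to
    depends_on:
      - mariadb
      - minio
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    ports:
      - "9000:9000"                   # S3 API
      - "9001:9001"                   # browser console
    volumes:
      - ./minio-data:/data            # the folder we inspect later for Parquet files
  trino:
    image: trinodb/trino:407
    ports:
      - "8080:8080"
    volumes:
      - ./catalog:/etc/trino/catalog  # hive and delta catalog property files
    depends_on:
      - hive-metastore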

The detailed final result is in my repository. Grab its contents, navigate into the repository folder, and from inside it run

docker-compose up

and you are ready to go. To connect to the Trino coordinator, download the Trino CLI and follow the instructions. The Trino interaction as outlined in the Iceberg post did not work for me, possibly because I use newer versions, so I post the modified steps here.
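For reference, one way to fetch the CLI version used below is straight from Maven Central (the exact URL is my assumption; the official Trino documentation is authoritative):

wget https://repo1.maven.org/maven2/io/trino/trino-cli/407/trino-cli-407-executable.jar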

First, in the Trino REPL, make sure the catalogs you are interested in (hive in this case) are in place. In the repo we have a hive catalog that talks directly to MinIO. Here is a screenshot from my laptop:

Catalogs in Trino
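If you prefer typing the check yourself rather than trusting my screenshot, it is a single statement (the exact output depends on the catalogs defined in the repo):

SHOW CATALOGS;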

Now we need to create an “iris” bucket from the MinIO browser-based console running at

http://localhost:9001

See the docker-compose file for the necessary credentials.

Start the Trino REPL:

java -jar trino-cli-407-executable.jar --server http://127.0.0.1:8080

Now we can create the iris schema in a similar way to the Iceberg post.
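The exact statements live in the repository; the sketch below only illustrates their shape, with the schema location and column names being my assumptions based on the classic iris dataset:

CREATE SCHEMA hive.iris
WITH (location = 's3a://iris/');

CREATE TABLE hive.iris.iris_data (
    sepal_length DOUBLE,
    sepal_width  DOUBLE,
    petal_length DOUBLE,
    petal_width  DOUBLE,
    class        VARCHAR
)
WITH (format = 'PARQUET');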

And here is the execution:

There is an object now in MinIO.

One object created

Let’s fill this initially empty object with data:
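The repository again holds the real statements; the rows below are just a small sample from the iris dataset, so the actual insert may differ:

INSERT INTO hive.iris.iris_data
VALUES
    (5.1, 3.5, 1.4, 0.2, 'Iris-setosa'),
    (7.0, 3.2, 4.7, 1.4, 'Iris-versicolor'),
    (6.3, 3.3, 6.0, 2.5, 'Iris-virginica');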

We get:

Table is filled with data
Table now has data

If you navigate into the minio-data folder, you will eventually reach the Parquet file that was created.
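For example, from the repository root (assuming the minio-data bind mount from the compose sketch above):

find minio-data -name '*.parquet'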

Bringing Delta Lake into the picture

Now it is time to deviate from the Iceberg post and move to Delta Lake. Delta Lake is a different beast that, among other things, offers optimistic concurrency control. There is a nice comparison here.

Time to get our hands wet. Shut down the containers. We need to create a catalog for the Delta Lake schema. Pay attention to the last property: without it, writes over S3 are disabled because of the potential for file system corruption.
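The catalog file itself is in the repository; the sketch below shows the shape such a delta catalog properties file can take, with the endpoint and credentials matching the assumptions in the compose sketch above. The last line is the property in question: Trino's Delta Lake connector refuses to write to S3 by default, because S3 cannot guarantee an exclusive writer, and this flag opts in anyway.

connector.name=delta_lake
hive.metastore.uri=thrift://hive-metastore:9083
hive.s3.endpoint=http://minio:9000
hive.s3.aws-access-key=minio
hive.s3.aws-secret-key=minio123
hive.s3.path-style-access=true
# allow writes even though S3 cannot guarantee an exclusive writer
delta.enable-non-concurrent-writes=true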

We can again create a Delta Lake table and populate it, just as we did for Hive; see the sketch below (the statements in the repository are authoritative).
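As before, the schema location, column names and sample values are my assumptions:

CREATE SCHEMA delta.iris
WITH (location = 's3a://iris/delta/');

CREATE TABLE delta.iris.iris_data (
    sepal_length DOUBLE,
    sepal_width  DOUBLE,
    petal_length DOUBLE,
    petal_width  DOUBLE,
    class        VARCHAR
);

INSERT INTO delta.iris.iris_data
VALUES
    (5.1, 3.5, 1.4, 0.2, 'Iris-setosa'),
    (7.0, 3.2, 4.7, 1.4, 'Iris-versicolor'),
    (6.3, 3.3, 6.0, 2.5, 'Iris-virginica');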

Populate delta table.

We made the mistake of creating the schema twice. Delta Lake now has 2 schemas! With the Hive catalog this is not possible; it simply fails.

Double schema creation

Not only that, but Delta Lake also tracks our 3 insertions (we performed 3 distinct populations, on purpose this time 😉) against our latest schema.

3 version bumps in Delta lake

This is exactly why we love Delta Lake. We see 3 equal chunks, since after all we add 10 rows each time, and if we query the table we get the whole 30 rows from the latest schema version.
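Both points can be confirmed from the Trino REPL. The queries below assume the table names used in my sketches and the connector's $history metadata table:

SELECT version, operation, timestamp
FROM delta.iris."iris_data$history";

SELECT count(*) FROM delta.iris.iris_data;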

Conclusion

We demonstrated Delta Lake's ability to do optimistic concurrency control. In order to view this capability in a local environment, we created a mini-system out of Trino, MinIO and a Hive Metastore. There is a repository for your own experiments. I hope you enjoyed the journey as much as I did. Feel free to download, play with and improve the code, and to suggest whatever improvements you think of. This is educational material and I hope you gained something out of it.
