Job Title: Data Engineer
Position: Full Time
Location: United States
DiMuto is a rising Agri Fintech company based in Singapore. Our mission is to use technology to transform global food supply chains for the better and to tackle big issues like food waste, sustainability, and the global trade finance gap.
DiMuto simplifies every step of global AgriFood trade. From produce, to trade, to market, our AgriFood Trade Solutions help growers, exporters, and importers trade efficiently with better visibility and financing. Equipped with a data-backed growth roadmap, companies can navigate the complex global trade landscape with ease and focus on what matters – growing a thriving international business.
With Visible Trade, DiMuto powers companies and the world forward with confidence.
Since 2019, DiMuto has successfully tracked and traced millions of pieces of produce and millions of dollars in trade value on the DiMuto platform, working with a global portfolio of clients in over ten countries on five continents.
What you will be doing:
- Develop ETL (extract, transform, load) processes to extract and manipulate data from multiple sources.
- Work with both technical and non-technical stakeholders to generate insights from raw datasets.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Create interactive and visually appealing dashboards using data visualization tools to effectively communicate insights and trends.
What you should have:
- Passion for digging into data and generating insights.
- Ability to communicate data trends and insights, and to work with the business team to make them actionable.
- Thrive in a fast-paced environment and adapt quickly to growing business needs.
- Experience working with SQL and NoSQL databases including database design.
- Experience working with cloud data warehouse solutions such as Snowflake, Redshift, or Azure Synapse.
- Working knowledge of cloud platforms such as AWS, Azure, or GCP.
- Experience working with large datasets and distributed computing frameworks such as Hadoop, Spark, or MapReduce.
- Proficient in creating dashboards using data visualization tools such as Tableau, Power BI, or Looker.
To apply, please click here.