DBS Bank

Big Data Engineer

  • DBS Bank, Singapore
  • Salary: 7,000 (Competitive)
  • Posted: 09 Sep 2019

Job Description

The Opportunity

  • We’re looking for enthusiastic proponents of our vision, driven to understand the impact of technology around the world, to join a major transformation programme to reimagine banking.
  • Based in Singapore, you’ll leverage the latest technology to develop mission-critical applications from scratch. You will be part of a strategic transformation programme spanning UX, design thinking, Agile, cloud technologies, microservices and Big Data.

Business Function

  • Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.

Responsibilities

  • Design and implement key components of a highly scalable, distributed data collection and analysis system built to handle petabytes of data in the cloud.
  • Work with architects from other divisions contributing to this analytics system and mentor team members on best practices in backend infrastructure and distributed computing topics. 
  • Analyze source data and data flows, working with structured and unstructured data.
  • Manipulate high-volume, high-dimensionality data from varying sources to highlight patterns, anomalies, relationships and trends.
  • Analyze and visualize diverse sources of data, interpret results in the business context and report results clearly and concisely.
  • Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms.
  • Work side-by-side with product managers, software engineers, and designers in designing experiments and minimum viable products.
  • Build and optimize classifiers using machine learning techniques, and enhance data collection procedures that are relevant for building analytic systems.
  • Discover data sources, get access to them, import them, clean them up, and make them “model-ready”. You need to be willing and able to do your own ETL.
  • Create and refine features from the underlying data. You’ll enjoy developing just enough subject matter expertise to have an intuition about what features might make your model perform better, and then you’ll lather, rinse and repeat.
  • Run regular A/B tests, gather data, perform statistical analysis, draw conclusions on the impact of your optimizations and communicate results to peers and leaders.

Job Requirements

  • More than 8 years of software engineering experience, with deep exposure to big data and machine learning
  • The ability to work with loosely defined requirements and exercise your analytical skills to clarify questions, share your approach and build/test elegant solutions in weekly sprint/release cycles.
  • Strong experience managing mid- to large-scale teams (offshore and onshore)
  • Development experience in Java/Scala and pride in producing clean, maintainable code
  • Practical experience in clustering high dimensionality data using a variety of approaches
  • Real world experience in solving business problems by deploying one or more machine learning techniques
  • Experience creating pipelines to analyze data, extract features and update models in production.
  • Independence and self-reliance, while being a proactive team player with excellent communication skills.
  • Hands-on development with key technologies including Scala, Spark, and other relevant distributed computing languages, frameworks, and libraries. 
  • Experience with distributed databases, such as Cassandra, and the key issues affecting their performance and reliability. 
  • Experience using high-throughput, distributed message queueing systems such as Kafka.
  • Familiarity with operational technologies, including Docker (required), Chef, Puppet, ZooKeeper, Terraform, and Ansible (preferred). 
  • An ability to periodically deploy systems to on-prem environments. 
  • Mastery of key development tools such as Git, and familiarity with collaboration tools such as Jira and Confluence or similar tools.
  • Experience with Teradata SQL, Exadata SQL, T-SQL
  • Strong experience in graph and stream processing
  • Experience in migrating SQL from traditional RDBMS to Spark and BigData technologies
  • Experience in building language parsers using ANTLR, query optimizers and automatic code generation
  • In-depth knowledge of database internals and Spark SQL Catalyst engine

Additional Information

About DBS Bank

DBS is a leading financial services group in Asia with a presence in 18 markets. Headquartered and listed in Singapore, DBS has a growing presence in the three key Asian axes of growth: Greater China, Southeast Asia and South Asia.

Recognised for its global leadership, DBS has been named “Global Bank of the Year” by The Banker and “Best Bank in the World” by Global Finance. The bank is at the forefront of leveraging digital technology to shape the future of banking, having been named “World’s Best Digital Bank” by Euromoney. In addition, DBS has been accorded the “Safest Bank in Asia” award by Global Finance for ten consecutive years from 2009 to 2018.

As a company of 27,000 innovators, we constantly challenge conventions and stereotypes of a traditional bank. Digital transformation pervades every part of the bank, and our leaders inspire employees to challenge the status quo, innovate and reimagine banking. Employees are empowered with opportunities to experiment and come up with innovative solutions that will help transform customer experiences, so that they can ‘Live more, Bank less’.

Discover how you can live fulfilled at DBS – where you can be the best, be the change and be the difference.
