We are looking for engineers with a real passion for distributed computing and hands-on experience developing data applications in PySpark. You will work with our data science team on the development of several data applications.
Mandatory
1. Must be able to fetch data from data sources (databases, APIs, flat files, etc.)
2. Must know functional programming in Python inside and out, with a strong flair for data structures, linear algebra, and algorithm implementation
3. Must be able to convert existing Python code to a functional style and break it up for distributed execution
4. Must have worked on at least one real-world production project on PySpark
5. Must have implemented complex mathematical logic through PySpark at scale on parallel/distributed clusters
6. Must be able to recognize code that parallelizes well and is less memory-constrained, and apply best practices to avoid runtime issues and performance bottlenecks
7. Must have extensive experience with performance tuning, optimization, configuration, and scheduling in PySpark
8. Must have integrated APIs, streams, databases, and files (JSON, XML, CSV, etc.) through PySpark
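Mandatory items 2 and 3 concern restructuring imperative Python into a functional style. A minimal sketch of the kind of refactor we mean (plain Python, no Spark required; the function and variable names are illustrative only):

```python
from functools import reduce

# Imperative version: mutates an accumulator inside a loop,
# which does not distribute across a cluster.
def total_of_squares_imperative(values):
    total = 0
    for v in values:
        if v % 2 == 0:
            total += v * v
    return total

# Functional version: pure functions composed with filter/map/reduce,
# no mutation. The same shape maps directly onto PySpark RDD
# transformations, e.g. rdd.filter(...).map(...).reduce(...).
def total_of_squares_functional(values):
    evens = filter(lambda v: v % 2 == 0, values)
    squares = map(lambda v: v * v, evens)
    return reduce(lambda a, b: a + b, squares, 0)

data = [1, 2, 3, 4, 5, 6]
assert total_of_squares_imperative(data) == total_of_squares_functional(data) == 56
```

Because the functional version has no shared mutable state, each element can be processed independently on any worker, which is what makes the code distributable.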
Preferred
1. Good to have working knowledge of first-class and higher-order functions, pure functions, recursion, lazy evaluation, and immutable data structures
2. A firm understanding of the underlying mathematics is needed to adapt modelling techniques to the problem space with large data (1M records)
3. Good to have worked with PySpark MLlib and PySpark ML
4. Configured checkpointing and Directed Acyclic Graphs (DAGs) on a PySpark cluster
5. Worked on development of a data platform
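The concepts in preferred item 1 can be illustrated in plain Python (no Spark needed; all names here are illustrative):

```python
from itertools import islice

# First-class / higher-order functions: compose takes two functions
# and returns a new one.
def compose(f, g):
    return lambda x: f(g(x))

# Lazy evaluation: a generator yields values only on demand, much like
# Spark transformations, which execute nothing until an action runs.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

# Recursion on an immutable structure: summing a tuple without mutation.
def recursive_sum(xs):
    return 0 if not xs else xs[0] + recursive_sum(xs[1:])

double_then_inc = compose(lambda x: x + 1, lambda x: x * 2)

# map over an infinite lazy stream, take only the first five results.
first_five = list(islice(map(double_then_inc, naturals()), 5))

# Immutable data structure: a tuple cannot be modified in place.
frozen = tuple(first_five)
```

Here `first_five` evaluates to `[1, 3, 5, 7, 9]`: nothing in the infinite `naturals()` stream is computed beyond the five elements actually requested, which is the same deferred-execution model PySpark uses for its DAG of transformations.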