Job Description
Required skills (maximum of 6):
- Develop and optimize data processing jobs using PySpark to handle complex data transformations and aggregations efficiently.
- Design and implement robust data pipelines on the AWS platform, ensuring scalability and efficiency (Databricks exposure will be an advantage).
- Leverage AWS services such as EC2, S3, etc. for comprehensive data processing and storage solutions.
- Expertly manage SQL database schema design, query optimization, and performance tuning to support data transformation and loading processes.
- Design and maintain scalable, performant data warehouses, employing best practices in data modeling and ETL processes.
- Utilize modern data platforms for collaborative data science, integrating seamlessly with various data sources.

The resource should be available for a face-to-face (F2F) interview at the IBM location based on account request, with Day 1 reporting post onboarding (OB). RTH-Y.
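As an illustration of the kind of transformation-and-aggregation work the first skill describes, here is a minimal sketch using only the Python standard library; the sample records and field names are hypothetical, and the PySpark equivalent is noted in a comment.

```python
from collections import defaultdict

# Hypothetical rows resembling data a PySpark job might process.
records = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 50},
    {"region": "east", "amount": 25},
]

# Group-by-and-sum aggregation. In PySpark this would typically be:
#   df.groupBy("region").agg(F.sum("amount"))
totals = defaultdict(int)
for row in records:
    totals[row["region"]] += row["amount"]

print(dict(totals))  # sums per region for the sample data above
```

In a real pipeline, the same logic would run distributed across a Spark cluster rather than in a single-process loop.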