• As an AWS Data Engineer you will contribute to our Services practice and will have the below responsibilities:
• Work with the technical development team and team lead to understand desired application capabilities.
• Continuously improve software engineering practices.
• Work within and across Agile teams to test and support technical solutions across a full stack of development tools and technologies.
• Develop applications following established development lifecycles and continuous integration/deployment practices.
• Integrate open source components into data-analytic solutions.
• Work with vendors to enhance tool capabilities to meet enterprise needs.
• Willingness to continuously learn and share learnings with others.
• 5+ years of direct applicable experience with key focus:
o Python; AWS; Data Pipeline creation
• Develop code in Python, including:
o Developing data pipelines from various external data sources to internal data stores.
o Extracting data from the design database using Python.
o Developing Python APIs as needed
• Good experience writing Spark applications using Python and Scala.
• Minimum 3 years of hands-on experience with Amazon Web Services, including EC2, VPC, S3, EBS, ELB, CloudFront, IAM, RDS, and CloudWatch.
• Familiar with Spark DataFrames, Spark SQL, and the RDD API for performing data transformations and building datasets.
• Able to interpret business requirements and analyze, design, and develop applications using AWS Cloud and ETL technologies.
• Able to design and architect serverless applications using AWS Lambda, EMR, DynamoDB, and Security Token Service (STS).
• Ability to leverage AWS data migration tools and technologies, including Storage Gateway, Database Migration Service, and Import/Export services.
• Understanding of relational database design, stored procedures, triggers, user-defined functions, and SQL jobs.
• Familiar with CI/CD tools, e.g., Jenkins and UCD, for automated application deployments.
• Understanding of OLAP, OLTP, star and snowflake schemas, and logical/physical/dimensional data modeling.
• Ability to extract data from multiple operational sources and load it into staging, data warehouse, and data mart layers using SCD (Type 1/Type 2/Type 3/Hybrid) loads.
• Familiar with Software Development Life Cycle (SDLC) stages in a Waterfall and Agile environment.
Nice to have:
• Knowledge of Databricks platform, job clusters and orchestration of Databricks jobs.
• Familiar with the use of source control management tools for branching, merging, labeling/tagging, and integration, such as Git and SVN.
• Experience working with UNIX/LINUX environments
• Hands-on experience with IDEs such as Jupyter Notebook.
Education & Certification
University degree or diploma and applicable years of experience
Required qualifications to be successful in this role:
- Cloud Computing
- Spark SQL
What you can expect from us:
Insights you can act on
While technology is at the heart of our clients’ digital transformation, we understand that people are at the heart of business success.
When you join CGI, you become a trusted advisor, collaborating with colleagues and clients to bring forward actionable insights that deliver meaningful and sustainable outcomes. We call our employees "members" because they are CGI shareholders and owners who enjoy working and growing together to build a company we are proud of. This has been our Dream since 1976, and it has brought us to where we are today — one of the world’s largest independent providers of IT and business consulting services.
At CGI, we recognize the richness that diversity brings. We strive to create a work culture where all belong and collaborate with clients in building more inclusive communities. As an equal-opportunity employer, we want to empower all our members to succeed and grow. If you require an accommodation at any point during the recruitment process, please let us know. We will be happy to assist.
Ready to become part of our success story? Join CGI — where your ideas and actions make a difference.