RippleCode off-campus recruitment 2022
This is to inform all jobseeking candidates that RippleCode has announced hiring for the position of AWS Cloud Engineer. Candidates who have been waiting for an opportunity can apply now; the application link is available below.
NOTE: Before filling out the online application form, read all the details mentioned below carefully, and share this position.
Company: RippleCode
Position: AWS Cloud Engineer
Experience: 4-8 years
Location: Work From Home
Salary: Min 8-9 LPA (Expected)
Qualification: Any Graduate or Postgraduate
Batch: 2015 / 2016 / 2017 / 2018 / 2019 / 2020 / 2021 / 2022
Job Description
AWS Data Engineer Responsibilities:
• Evaluating, developing, maintaining, and testing data engineering solutions for Data Lake and advanced analytics projects.
• Working closely with US clients.
• Implementing processes and logic to extract, transform, and distribute data across one or more data stores from a wide variety of sources.
• Distilling business requirements and translating them into technical solutions for data systems, including data warehouses, cubes, marts, lakes, ETL integrations, BI tools, and other components.
• Creating and supporting data pipelines built on AWS technologies, including Glue, Redshift, EMR, Kinesis, and Athena.
• Participating in deep architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications on the AWS platform.
• Optimizing the data integration platform to provide optimal performance under increasing data volumes.
• Supporting the data architecture and data governance functions so they can continually expand their capabilities.
• Experience in the development of solution architecture for Enterprise Data Lakes (applicable for AM/Manager-level candidates).
• Should have exposure to client-facing roles.
• Strong communication, interpersonal, and team management skills.
AWS Data Engineer Qualification Requirements:
• Bachelor’s Degree in Computer Science or a related technical field.
• Minimum 5 Years of Experience
• Excellent communication skills.
• Proficient in an object-oriented or functional scripting language: Java, Python, Node.js, etc.
• Experience in using AWS SDKs for creating data pipelines: ingestion, processing, and orchestration.
• Hands-on experience in working with big data in an AWS environment including
cleaning/transforming/cataloguing/mapping etc.
• Good understanding of AWS components, including storage (S3) and compute (EC2) services.
• Hands-on experience in AWS-managed services (Redshift, Lambda, Athena) and
ETL (Glue).
• Experience in migrating data from on-premises sources (e.g. Oracle, API-based, data extracts) into AWS storage (S3).
• Experience in setting up a data warehouse using Amazon Redshift, creating Redshift clusters, and performing data analysis queries.
• Experience in ETL and data modeling on AWS ecosystem components – AWS
Glue, Redshift, DynamoDB
• Experience in setting up AWS Glue to prepare data for analysis through
automated ETL processes
• Familiarity with AWS data migration tools such as AWS DMS, Amazon EMR, and
AWS Data Pipeline
• Hands-on experience with AWS CLI, Linux tools, and shell scripts
• AWS certifications will be an added plus.
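To give a feel for the cleaning/transforming work the qualifications above describe, here is a minimal, hypothetical sketch of a transform step of the kind a Glue or Lambda job might run. It is pure Python with illustrative field names (`id`, `amount`, `sale_date`) chosen for the example, not taken from any RippleCode system:

```python
import json
from datetime import datetime

def transform_records(raw_records):
    """Clean and normalise raw records before loading them downstream.

    Illustrative transform step: trims whitespace, parses the sale
    date into ISO format, and drops rows missing required fields.
    """
    cleaned = []
    for rec in raw_records:
        # Basic data cleaning: skip rows missing required fields.
        if not rec.get("id") or not rec.get("sale_date"):
            continue
        cleaned.append({
            "id": rec["id"].strip(),
            "amount": float(rec.get("amount", 0)),
            # Normalise the date to ISO format for the warehouse.
            "sale_date": datetime.strptime(
                rec["sale_date"].strip(), "%d/%m/%Y"
            ).date().isoformat(),
        })
    return cleaned

raw = [
    {"id": " A1 ", "amount": "19.99", "sale_date": "05/03/2022"},
    {"id": "", "amount": "5.00", "sale_date": "06/03/2022"},  # dropped
]
print(json.dumps(transform_records(raw)))
```

In a real pipeline, a step like this would typically sit between an S3 extract and a Redshift load, orchestrated by Glue or AWS Data Pipeline as the listing mentions.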
How to Apply for this position?
Eligible and interested candidates can send their resumes directly to the email ID mentioned below.
Candidates can send their resumes to