Build, maintain, and optimize scalable data solutions to support Journey Analytics initiatives, focusing on code maintainability, reusable components, and reliable data pipelines. This role is responsible for maintaining and refactoring existing codebases, developing modular components, and ensuring high-quality, performant datasets for analytics and reporting use cases.
The ideal candidate has strong experience working with code repositories, building data pipelines in Databricks, and designing scalable data models to support evolving analytics needs in cross-functional environments.
Responsibilities:
- Maintain, optimize, and automate existing code repositories in GitHub.
- Refactor legacy code to simplify maintenance, updates, and reuse across multiple use cases.
- Design and build modular, reusable code components to support multiple journeys and reduce duplication.
- Develop and manage automated data pipelines in Databricks to support Journey Analytics datasets and downstream reporting (see the illustrative sketch after this list).
- Consolidate key KPIs, metrics, and attributes into standardized data structures to enable flexible journey views.
- Build and maintain scalable data models to support current and future journey analytics use cases.
- Ensure data quality, performance, and reliability across data pipelines and analytics datasets.
- Collaborate with analytics and engineering teams to improve data processes and architecture.
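For illustration only (not part of the formal requirements): a minimal sketch of the kind of modular, reusable Databricks pipeline component this role involves. It assumes PySpark, and the table and column names (journey_events, journey_kpis, journey_id, customer_id) are hypothetical.

```python
# Minimal sketch, assuming PySpark on Databricks; all names are hypothetical.
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

def add_journey_kpis(events: DataFrame) -> DataFrame:
    """Reusable component: rolls raw journey events up into per-journey KPIs."""
    return events.groupBy("journey_id").agg(
        F.count("*").alias("event_count"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )

# The same component can serve multiple journeys and downstream reports:
kpis = add_journey_kpis(spark.table("journey_events"))
kpis.write.mode("overwrite").saveAsTable("journey_kpis")
```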
Qualifications:
- Strong experience working with GitHub repositories and version control workflows.
- Hands-on experience developing and maintaining data pipelines in Databricks.
- Proven experience refactoring and maintaining legacy codebases.
- Strong understanding of data modeling and reusable component design.
- Experience building scalable data models for analytics and reporting use cases.
- Strong focus on data quality, performance, and reliability.
- Ability to work in cross-functional environments and contribute to continuous improvement.
- Ability to work independently and take ownership of initiatives after receiving high-level direction, driving tasks forward with minimal supervision.
What about languages?
- You will need excellent written and verbal English for clear and effective communication with the team.
How much experience must I have?
- To thrive in this role, you must have at least 5 years of experience in data engineering or similar roles.
Additional Information:
Our Perks and Benefits:
Learning Opportunities:
- Certifications in AWS (we are AWS Partners), Databricks, and Snowflake.
- Access to AI learning paths to stay up to date with the latest technologies.
- Study plans, courses, and additional certifications tailored to your role.
- Access to Udemy Business, offering thousands of courses to boost your technical and soft skills.
- English lessons to support your professional communication.
- Travel opportunities to attend industry conferences and meet clients.
Mentoring and Development:
- Career development plans and mentorship programs to help shape your path.
Celebrations & Support:
- Special day rewards to celebrate birthdays, work anniversaries, and other personal milestones.
- Company-provided equipment.
- Flexible working options to help you strike the right balance.
Other benefits may vary according to your location in LATAM. For detailed information regarding the benefits applicable to your specific location, please consult with one of our recruiters.
So, what are the next steps?
Our team is eager to learn about you! Send us your resume or LinkedIn profile below and we'll explore working together!
Remote Work:
Yes
Employment Type:
Full-time