Data Engineer

McDonald's
Full Time · McDonald's, 600 N Clark St, Chicago, IL 60610, United States

Overview

Company Description:

McDonald's evolving Accelerating the Arches growth strategy puts our customers and people first and demonstrates our competitive advantages to strengthen our brand. We are recognized on lists like Fortune’s Most Admired Companies and Fast Company’s Most Innovative Companies.

Doubling Down on the 4Ds (Delivery, Digital, Drive Thru, and Development)

Our growth pillars emphasize the critical role technology plays in making us a best-in-class, global omni-channel restaurant brand. Technology enables the organization through digital capabilities, improving the customer, crew, and employee experience each and every day.

Global Technology forging the way

Leading the digitization of our business is the Technology organization, made up of innovation specialists who build industry-defining tech using the latest innovations and platforms, like AI and edge computing, to deliver the next set of groundbreaking opportunities for the business. We take on technology innovation challenges at an incredible scale and work across global teams who are always hungry for a challenge. This provides access to compelling career paths for technologists. It's a bonus when you get to see your family and friends use the tech you build at their favorite McD restaurant.

Job Description:

McDonald’s Global Technology - Customer 360 team is looking to hire a Data Engineer/Site Reliability Engineer (SRE) with a deep understanding of the data product lifecycle, standards, and practices. In this position, you’ll play a key role in designing, implementing, and maintaining robust infrastructure. You’ll collaborate with development teams, automate processes, and ensure system reliability through proactive monitoring and incident response. Bring your expertise in scripting, cloud technologies, and containerization to optimize performance and contribute to our commitment to technological excellence.

Responsibilities:

  • Design, implement, and maintain scalable and reliable infrastructure.
  • Collaborate with development teams to enhance system architecture for optimal performance and stability.
  • Implement automation tools for continuous monitoring, testing, and deployment.
  • Respond to and resolve incidents, ensuring minimal downtime and quick recovery.
  • Conduct root cause analysis of reliability issues and implement preventive measures.
  • Optimize system performance and troubleshoot complex issues across the tech stack.
  • Collaborate on capacity planning and scalability initiatives.
  • Ensure data security and compliance with data governance policies and regulations.
  • Stay updated on industry best practices and emerging technologies in site reliability.
  • Develop a solid understanding of the technical details of data domains and of the business problems being solved.
  • Design and develop data pipelines and ETL processes to extract, transform, and load data from various sources into AWS data storage solutions (e.g., S3, Redshift, Glue); a minimal sketch follows this list.
  • Implement and maintain scalable data architectures that support efficient data storage, retrieval, and processing.
  • Build and optimize data integration workflows to connect data from different systems and platforms.
  • Manage data infrastructure on AWS, including capacity planning, cost optimization, and resource allocation.
  • Coordinate and work flexibly with teams distributed across time zones; for instance, early morning or late evening hours may be needed to sync with teams in India.
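
To give a concrete flavor of the pipeline work described above, the following is a minimal PySpark sketch of an extract-transform-load step that writes curated Parquet to S3. The bucket paths, the "orders" dataset, and the column names are illustrative assumptions rather than details from this posting; a production job of this kind would typically run on AWS Glue or EMR.

    # Minimal ETL sketch, assuming PySpark is available and AWS credentials
    # are configured; all paths, column names, and the "orders" dataset are
    # hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

    # Extract: read raw CSV files from an S3 landing prefix.
    raw = (
        spark.read
        .option("header", "true")
        .csv("s3://example-raw-bucket/orders/")
    )

    # Transform: drop exact duplicates, cast types, derive a date partition.
    cleaned = (
        raw.dropDuplicates()
        .withColumn("order_total", F.col("order_total").cast("double"))
        .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Parquet to a curated S3 zone that a Glue
    # Catalog table or Redshift Spectrum could then query.
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/")
    )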

Qualifications:

  • Bachelor’s degree in computer science, related field, or equivalent experience.
  • Proven experience in a Site Reliability Engineer or similar role.
  • 5+ years of strong experience in data engineering, specifically with AWS backend tech stack, including but not limited to S3, Redshift, Glue, Lambda, EMR, and Athena.
  • 5+ years of hands-on experience with data modeling, ETL development, and data integration techniques.
  • 5+ years of proficiency in programming languages commonly used in data engineering, such as Python.
  • Working knowledge of relational and dimensional data design and modeling in a large, multi-platform data environment.
  • Solid understanding of SQL and database concepts.
  • Expert knowledge of quality functions like cleansing, standardization, parsing, de-duplication, mapping, hierarchy management, etc.
  • Expert knowledge of data, master data, and metadata-related standards, processes, and technology.
  • Ability to drive continuous data management quality (i.e., timeliness, completeness, accuracy) through defined and governed principles.
  • Ability to perform extensive data analysis (comparing multiple datasets) using a variety of tools.
  • Demonstrated experience in data management and data governance capabilities.
  • Familiarity with data warehousing principles and best practices.
  • Excellent problem solver who uses data and technology to solve problems and answer complex data-related questions.
  • Excellent communication and collaboration skills to work effectively in cross-functional teams.
  • Experience building end-to-end (E2E) solutions that use Kafka for real-time data processing; a minimal consumer sketch follows this list.
  • Experience with Kafka implementation and optimization.
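
For context on the Kafka items above, here is a minimal Python sketch of a real-time consumer, assuming the kafka-python client; the broker address, the "orders" topic, and the payload fields are hypothetical placeholders, not details from this posting.

    # Minimal real-time consumption sketch, assuming the kafka-python client
    # (pip install kafka-python); broker, topic, and schema are hypothetical.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "orders",                            # hypothetical topic
        bootstrap_servers="localhost:9092",  # placeholder broker address
        group_id="orders-etl",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    # Process events as they arrive; in a real pipeline each record would be
    # validated, enriched, and batched into the curated data store.
    for message in consumer:
        event = message.value
        event["order_total"] = float(event.get("order_total", 0))
        print(event)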

Preferred Requirements:

  • Experience with GCP.
  • Experience with JIRA and Confluence as part of project workflow and documentation tools is a plus.
  • Experience with Agile project management methods and terminology is a plus.
  • Familiarity with CI/CD pipelines.
  • Experience with infrastructure as code.
  • Experience performing database migrations.