Data engineer
Procter and Gamble (P&G)’s Information Technology Department, based in North York, ON, is inviting applications from suitable candidates for the position of Data engineer. Procter and Gamble (P&G) is a global consumer goods company specializing in personal care, household, and hygiene products. Its Information Technology (IT) Department delivers digital solutions, including cloud computing, cybersecurity, and AI-driven automation, and helps ensure seamless connectivity, market leadership, and sustainable growth. The candidates selected for the vacancy will be required to start work as soon as possible.
Job Description:
Employer Name: Procter and Gamble (P&G)
Department: Information Technology
Position: Data engineer
No of Vacancies: 3
Salary: Not specified by the employer; estimated at $65.00 – $70.00 hourly
Employment Type: Full time
Job Category: Recent Grads/Entry Level
Location: North York, ON, Canada
Requisition ID: R000123701
Requirements:
Languages: Candidates must have knowledge of both English and French
Education: Candidates should have completed at least a bachelor’s degree from an accredited university
Experience: Candidates should have 1 to 3 years of experience after graduation from a post-secondary institution
Physical Requirements:
- The candidates should demonstrate the ability to think critically and solve problems
Other Requirements:
- The candidates should be able to conduct business in English and French (written and verbal) and be able to start the role in April 2025 in Toronto (domestic relocation benefits may apply)
- The candidates should possess strong leadership skills and strong written and verbal communication skills
- The candidates should have expertise in Azure Cloud and Big Data technologies to design, develop, and maintain scalable data pipelines and architectures
- The candidates should be able to leverage Azure services and big data frameworks to transform raw data into meaningful insights while ensuring data quality, security, and accessibility
- The candidates should have exposure to BI Development, as 10-15% of the role involves working with tools like Power BI to support business requirements
Responsibilities:
- The candidates should be able to design, build, and maintain ETL/ELT processes to ingest, process, and integrate data from various sources into Azure data services and utilize Azure Data Factory, Azure Databricks, and Azure Synapse Analytics to orchestrate data workflows
- The candidates should be able to implement big data solutions using Azure Data Lake Storage, Azure HDInsight, and other big data technologies and optimize data storage and retrieval techniques to ensure high performance and scalability
- The candidates should be able to develop and maintain data models, schemas, and metadata to support analytical and reporting needs, collaborate with data architects and analysts to understand data requirements and implement solutions accordingly, and establish data quality metrics and monitoring frameworks to ensure data accuracy and integrity
- The candidates should be able to implement data governance practices to manage data access, security, and compliance and work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data needs and deliver actionable insights
- The candidates should be able to document data engineering processes, architecture designs, and best practices and monitor and optimize data processing workflows for performance and cost-efficiency
- The candidates should be able to troubleshoot and resolve data-related issues in a timely manner and stay updated on emerging technologies and trends in data engineering, big data, and cloud computing
- The candidates should be able to evaluate and recommend new tools and technologies that align with business needs and demonstrate exposure to front-end Business Intelligence (BI) development
- The candidates should be able to work with tools like Power BI and JavaScript for BI/Web app development and utilize Microsoft Azure Cloud Services in relation to data engineering
- The candidates should be able to use Python (primarily PySpark) for data processing and apply object-oriented programming principles and use version control tools such as GitHub and Azure DevOps
- The candidates should be able to work with Power BI and other visualization tools aligned with Microsoft Azure, write DAX queries, implement incremental data processing, develop semantic models, and design star schemas and surrogate keys (SKIDs), as illustrated in the sketch after this list
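To give a concrete sense of the incremental-processing and surrogate-key work mentioned above, below is a minimal PySpark sketch assuming a Databricks-style environment with Delta Lake available. The table names (raw.customers, dw.dim_customer), column names, and watermark logic are hypothetical and chosen only for illustration; they are not part of the job posting.

```python
# Minimal sketch: incremental load into a star-schema dimension table with
# surrogate keys (SKIDs). Assumes Spark with Delta Lake; all names are hypothetical.
from pyspark.sql import SparkSession, Window, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental-dim-load").getOrCreate()

# 1. Read only source rows newer than the last processed watermark (incremental ingest).
last_watermark = (
    spark.table("dw.dim_customer").agg(F.max("source_updated_at")).first()[0]
) or "1900-01-01 00:00:00"

staged = (
    spark.table("raw.customers")                       # hypothetical raw source table
    .select("customer_id", "customer_name", "updated_at")
    .where(F.col("updated_at") > F.lit(last_watermark))
    .dropDuplicates(["customer_id"])                    # basic data-quality step
)

# 2. Assign surrogate keys by continuing from the current maximum key.
#    (Single-partition window is fine for a small incremental batch.)
max_skid = spark.table("dw.dim_customer").agg(F.max("customer_skid")).first()[0] or 0
staged = staged.withColumn(
    "customer_skid",
    F.row_number().over(Window.orderBy("customer_id")) + F.lit(max_skid),
)

# 3. Merge (upsert) into the dimension so reruns stay idempotent.
#    Existing rows keep their surrogate key; only new rows receive one.
dim = DeltaTable.forName(spark, "dw.dim_customer")
(
    dim.alias("t")
    .merge(staged.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdate(set={
        "customer_name": "s.customer_name",
        "source_updated_at": "s.updated_at",
    })
    .whenNotMatchedInsert(values={
        "customer_skid": "s.customer_skid",
        "customer_id": "s.customer_id",
        "customer_name": "s.customer_name",
        "source_updated_at": "s.updated_at",
    })
    .execute()
)
```

In practice, a pipeline like this would typically be orchestrated by Azure Data Factory or a Databricks job, with the resulting dimension and fact tables surfaced to Power BI through a semantic model.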
How to apply:
If this position suits you and you meet the basic requirements, you can apply directly to the employer (along with your resume) through the details mentioned below.
We thank all applicants for their interest and trust; however, only the most suitable candidates will be selected and contacted directly by the employer for the next steps. No charges, fees, or original documents will be requested from any applicant at any stage. All the best!