Data Engineer (PySpark) - Contract

  • Australia
  • Sydney
  • Contract
  • Negotiable

Duration: 3-6 months (with potential for extension)

Location: Sydney

Bill Rate: $800–$1,000 per day

Citizenship Requirements: None

About the Role:

Join a leading global consultancy known for delivering high-impact solutions to some of the world’s largest and most prestigious clients. We are seeking an experienced Data Engineer (specializing in PySpark development) for a 3-6 month contract, with the potential for extension. This role offers an incredible opportunity to work on data engineering projects that drive transformation across industries, particularly in data-intensive domains.

As a Data Engineer with a focus on PySpark and Python, you will help design, build, and optimize robust data pipelines and solutions for complex datasets. You’ll collaborate with cross-functional teams to ensure the efficient and scalable delivery of data solutions for high-profile clients across the globe.
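To give a flavour of the day-to-day work, here is a minimal PySpark sketch of the kind of pipeline described above; all dataset names, paths, and columns are hypothetical placeholders rather than details of any specific client engagement:

    # Minimal illustrative PySpark pipeline (hypothetical data and paths)
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

    # Read a large raw dataset from cloud storage (path is illustrative)
    transactions = spark.read.parquet("s3://example-bucket/raw/transactions/")

    # Transform: filter bad records, derive a date column, aggregate per day
    daily_totals = (
        transactions
        .filter(F.col("amount").isNotNull())
        .withColumn("txn_date", F.to_date("timestamp"))
        .groupBy("txn_date", "merchant_id")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"),
        )
    )

    # Write the curated result back, partitioned for efficient downstream reads
    daily_totals.write.mode("overwrite").partitionBy("txn_date").parquet(
        "s3://example-bucket/curated/daily_totals/"
    )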

Key Responsibilities:

  • Design, develop, and maintain efficient data pipelines using PySpark and Python to handle large-scale datasets.
  • Transform and analyze complex data to provide actionable insights that drive business decision-making.
  • Work closely with other data engineers, analysts, and business teams to understand requirements and build data solutions that meet client needs.
  • Optimize data processes for performance and scalability, ensuring high-quality results.
  • Assist in deploying data solutions within cloud environments.
  • Contribute to the design and evolution of data architecture, following industry best practices in data management and engineering.

Required Skills and Experience:

  • Solid experience working with PySpark to process and analyze large datasets in a distributed environment.
  • Strong proficiency in Python for data engineering tasks and automation.
  • Proven experience working with large-scale data processing frameworks.
  • Familiarity with cloud technologies and platforms such as AWS, GCP, or Azure.
  • Strong problem-solving and troubleshooting skills, with the ability to adapt to new tools and techniques.
  • Excellent communication skills, with the ability to collaborate effectively within a fast-paced, dynamic environment.

Nice to Have:

  • Previous experience in the banking or financial services sector is highly desirable, as the role will involve working on high-impact data solutions for global financial clients.
  • Knowledge of additional big data technologies beyond Spark, such as Hadoop or Flink, would be beneficial.

Why Join Us:

  • Work for a leading global consultancy with a strong reputation for delivering excellence.
  • Opportunity to contribute to high-profile projects that shape the future of data engineering in the global marketplace.
  • Collaborate with a team of top-tier professionals on innovative, cutting-edge data solutions.
  • Competitive daily rate offering excellent earning potential.

If you’re a talented Data Engineer with PySpark expertise and you’re looking for an exciting contract opportunity with a global consultancy, we want to hear from you.

Apply now

Submit your details and attach your resume below. Hint: make sure all relevant experience is included in your CV and keep your message to the hiring team short and sweet: 2,000 characters or less is perfect.