
Exciting Opportunity: Cognitive Automation Role
Are you ready to take your expertise in Python, PySpark, and Big Data to the next level? Join a dynamic team where innovation meets experience, and build your career in Cognitive Automation (AI & ML)!
Role Overview:
This position offers the perfect opportunity for professionals with 1-3 years of experience in Cognitive Automation to work on cutting-edge projects, leveraging advanced tools and technologies like PySpark, Hive, IMPALA, and Tableau.
Key Responsibilities:
✅ Design, develop, and maintain data pipelines for various sources.
✅ Collaborate with stakeholders to deliver data-driven solutions for decision-making.
✅ Implement and optimize data warehouses, data lakes, and ETL processes.
✅ Ensure data quality, security, and scalability.
✅ Create impactful data visualizations and reports.
✅ Leverage tools like Apache Airflow and cloud platforms such as AWS, Azure, or GCP.
What You’ll Need:
🔹 Proficiency in Python and SQL with knowledge of coding standards.
🔹 Hands-on experience with PySpark and Big Data processing frameworks (Hadoop, Spark).
🔹 Familiarity with NoSQL databases like MongoDB or Cassandra (good to have).
🔹 Strong understanding of data modeling, data architecture, and orchestration frameworks.
🔹 Knowledge of tools like JIRA, Git & Bitbucket.
Why Join Us?
✨ Work on transformative AI & ML projects.
✨ Collaborate with experienced mentors in the industry.
✨ Stay up to date with the latest tools and trends in Data Engineering.
✨ Contribute to impactful solutions that drive business success.
Take the leap into an innovative role that blends Cognitive Automation, AI, and Big Data. Your expertise can make a significant difference!
🔗 Apply now and become a part of this exciting journey.
Cognitive Automation (AI & ML) | 1-3 yrs
• Proficiency in programming languages such as Python, with proper coding standards.
• Strong knowledge of PySpark, IMPALA, HIVE & Tableau.
• Strong experience with SQL and relational databases (e.g., PostgreSQL, MySQL, etc.).
• Good to have: familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
• Knowledge of data modeling, data warehousing, and data architecture.
• Experience with big data processing frameworks (e.g., Hadoop, Spark).
• In-depth knowledge of orchestrating and scheduling data engineering pipeline jobs with Apache Airflow.
• Experience with cloud-based data solutions (e.g., AWS, Azure, GCP) is advantageous.
• Experience with JIRA, GIT & Bitbucket.
• Responsible and self-driven work in the field of data engineering.
• Design, develop, and maintain data pipelines and ETL processes to ingest, process, transform, and store large volumes of data from various sources.
• Collaborate with business stakeholders to understand their data requirements and provide data-driven solutions and insights for decision-making.
• Optimize and tune data pipelines and database performance for scalability, efficiency, and reliability.
• Implement and maintain data warehouses, data lakes, and other data storage solutions for efficient data retrieval and analysis.
• Ensure data quality, integrity, and security throughout the data lifecycle.
• Create data visualizations and reports to communicate findings effectively to technical and non-technical audiences.
• Stay updated with the latest data engineering tools and best practices.
• Interest in data (analysis, schema, validation, visualization).
• Should be able to handle both big data and small data.