PLN 19,000–23,000 + VAT (B2B)
At Idego Group, you’ll work with people who find pleasure in programming and have deep knowledge of a variety of technologies. You’ll work with our clients worldwide, supporting great software development, including IoT, machine learning, and blockchain-related projects.
We deliver quality and allow ourselves a lot of autonomy, common sense, and general friendliness.
We are seeking a highly intelligent, energetic, and ambitious team player to help the data engineering team implement automated processes that support new data source integrations, expand data transformation capabilities, enable programmatic monitoring of processes and data quality across multiple systems, and facilitate the automation of data management reporting.
7 years of relevant work experience
5+ years of experience developing in object-oriented programming languages, preferably Python or Java
Highly competent with Scala, Spark, the Spark engine, and the Spark DataFrame API
Experience with development best practices under continuous integration, testing, and deployment in an Agile environment, using source control tools such as Git
Familiarity with database tools, integration architecture, data integration, ETL, business intelligence concepts, and big data solutions
Expertise using SQL for acquiring and transforming data
Ability to maintain, refactor, improve, and test existing code to reduce technical debt
Knowledge and experience in integrations involving a variety of data providers
B.S. in Computer Science or a related field, plus industry experience
Build scalable data processing pipelines in Spark
Debug Spark jobs and tune their performance
Write unit and integration tests for all data processing code
Read specs and translate them into code and design documents
Perform code reviews and develop processes for improving code quality
Integrating data from various data sources (e.g., CRM, messaging, video conferencing, telephony, group document management, contract signature/management, etc.); writing in-house data back into source systems
Reporting data quality and process execution alerts
Performing data extraction, transformation, cleansing, and loading between different data layers and environments (see the sketch after this list)
Consolidating and loading relevant data for dashboards covering data quality, metadata management, data consumption, UI usage, and process performance
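To give a feel for the day-to-day work, here is a minimal sketch of the kind of Spark DataFrame ETL step described above. It is purely illustrative: the bucket paths, object name, and column names are hypothetical and not part of the role description.

```scala
// A minimal, illustrative ETL sketch (not taken from the job description itself):
// extract raw CRM events, cleanse them, and load a curated layer for dashboards.
// All paths, object names, and column names below are hypothetical.
import org.apache.spark.sql.{SparkSession, functions => F}

object CrmEventsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("crm-events-etl")
      .getOrCreate()

    // Extract: raw events landed as JSON files (hypothetical S3 location).
    val raw = spark.read.json("s3a://example-bucket/raw/crm_events/")

    // Transform: drop incomplete rows, normalise the timestamp, deduplicate.
    val cleansed = raw
      .filter(F.col("account_id").isNotNull)
      .withColumn("event_ts", F.to_timestamp(F.col("event_ts")))
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .dropDuplicates("event_id")

    // Load: write a partitioned, query-friendly layer for downstream reporting.
    cleansed.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/curated/crm_events/")

    spark.stop()
  }
}
```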
Experience with Kafka messaging and big data technologies such as Hadoop, HDFS, and MongoDB
Familiarity with NoSQL
Experience with Spring XD, XT
Experience with shell scripting
Experience with REST/SOAP APIs and cloud services
CRM knowledge, Jenkins, Airflow
AWS S3, AWS Redshift, AWS ECS, AWS SQS
G Suite, Slack, Jira, Confluence, Git, and GitHub
We're passionate about the work we do, on both the front end and the back end. :) We'd like to know whether you share this passion, so don't hesitate to show us (or let us click through) the apps you've created.
Beautiful code? Sure, that's our benchmark! But we are a TEAM, and every day we rely on trust and responsibility towards each other - so openness about risks and challenges, as well as sharing funny stuff, is highly appreciated!
We learn. Constantly. Actually, our purpose is to deliver great solutions - so we expect from each other, and from you, an "I'll solve this problem" approach rather than "At least it works. Locally."
And if you're reading this sentence right now, you probably already know that we also expect fluency in English. :)