EPAM Systems opened its Gdansk office, the company's second branch in Poland, in 2014. It is currently one of the largest companies in the region, employing around 500 engineers. Due to the dynamic growth of the branch, we are looking for Software Developers and Testers at every level of seniority.
We have already hired some great people from Brazil, and we are looking for more!
We are seeking a Big Data Software Engineer to take part in a data-representation transformation within a complex project in the health insurance domain. The project involves restructuring and re-engineering the current relational-database approach into a Big Data solution, which should enable deeper data analysis and insights into the company's services. The project is large, complex, and interesting from both an architectural and a technical point of view.
- Relocation package that covers visa fees, flights, and other costs related to relocating you and your family.
- Support from our relocation team with visa applications and relocation formalities.
- Full-time job in a stable and well-organized work environment.
- Language classes (English and Polish).
- Vast opportunities for self-development: online courses and a library, experience exchange with colleagues around the world, and partial funding of certifications.
- Career development center.
- Benefits package (health care, Multisport card, lunch tickets, petrol vouchers, shopping vouchers, etc.).
- Sponsored sports activities, an e-sports program, and the EPAM Sailing Team.
You will help build out a Data Management & Analytics platform that enables the customer to simplify data analytics, extract meaningful information from their data, and realize measurable value from it. Along with predictive modeling, the information mined from the raw data supports improvements in quality of care and cost efficiency for the company. The ultimate goal of the program is building healthier individuals, healthier families, and healthier communities through the analytics platform delivered by this team.
- Building big data pipelines for batch and real-time data processing;
- Developing, loading, and running predictive models on machine-learning platforms;
- Building and running data analytics environments;
- Analyzing existing ETLs and designing new solutions based on Spark;
- Collecting, processing, and cleansing data from a wide variety of sources; transforming unstructured data sets into structured data for algorithm input;
- Evaluating the effectiveness of user experiences, determining what data is needed and how to collect it;
- Integrating ML models into production;
- Designing and using validation tools to compare the results of the original and new solutions;
- Building and maintaining a Hadoop or Spark cluster, together with the many other tools in the ecosystem: databases (such as Hive and HBase) and streaming data platforms (Kafka, Spark Streaming, etc.).
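To give a flavor of the "collect, cleanse, and structure" responsibility above, here is a toy sketch in plain Python (rather than Spark, for brevity); the record layout and field names are invented for illustration only:

```python
import json

# Hypothetical raw records: pipe-delimited claim lines from mixed sources.
raw_records = [
    "12345|2019-03-01|  Cardiology |420.50",
    "12346|2019-03-02|oncology|not_available",   # unparseable amount -> dropped
    "|2019-03-03|Radiology|99.00",               # missing claim id -> dropped
]

def cleanse(line):
    """Parse one pipe-delimited line into a structured dict, or None if invalid."""
    parts = [p.strip() for p in line.split("|")]
    if len(parts) != 4 or not parts[0]:
        return None
    claim_id, date, department, amount = parts
    try:
        amount = float(amount)
    except ValueError:
        return None
    return {"claim_id": claim_id,
            "date": date,
            "department": department.title(),
            "amount": amount}

# Keep only the records that survive cleansing.
structured = [r for r in (cleanse(l) for l in raw_records) if r is not None]
print(json.dumps(structured, indent=2))
```

In a Spark-based pipeline, the same per-record logic would typically run distributed as a map/filter over an RDD or as DataFrame transformations.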
- Apache Spark (Core, Streaming, SQL);
- Apache Hadoop Hive, HBase, HDFS;
- CI/CD (Docker, Jenkins, Git);
- Various machine learning tools;
- SQL and NoSQL databases;
- Some exposure to REST API web services;
- Some exposure to Kafka and IBM MQ.
OUR RECRUITMENT PROCESS
- Send your CV in English.
- General interview with an IT Recruiter via Skype (including a 10-minute technical test).
- Technical Interview with Technical Interviewer via Skype.
- Manager Interview with Hiring Manager via Skype.
- Customer Interview via Skype (optional).
- Official Offer.
- Full support with relocation to Gdansk.