M+E Technology Job Board
Data Engineer, SalesforceIQ Analytics Team
Salesforce
Join a small, growing team that builds and operates new and existing data pipelines to deliver real-time business intelligence for Salesforce products and infrastructure. We use the latest cloud technologies to build highly scalable data pipelines and data repositories.
As a data engineer, you are comfortable writing data extractors and transformers to move high volume data via the data pipeline. You use open-source projects to accelerate building highly scalable and innovative solutions. You deploy your work to public cloud platforms such as AWS and instrument them to collect and analyze operational data.
Our team culture empowers you to take ownership of your features or components. This is a unique ground-floor opportunity for self-motivated individuals.
Key Responsibilities:
Design and implement highly efficient real-time and batch data pipelines
Maintain and speed up existing data pipelines as data volume continues to grow
Build and maintain data warehouse schemas for shifting business requirements
Work closely with product management and operations teams to develop, test, deploy, and operate high-quality software
Required Skills:
2+ years of software development experience with a distinguished track record on technically demanding projects
2-5 years of professional experience with modern programming languages such as Java or Scala
Ability to quickly learn new technologies and work effectively in a fast-paced, dynamic environment
Passion for creating new products and services, including comfort with the ambiguity of designing new products
Experience with open-source technologies and cloud platforms
BS or MS in Computer Science or equivalent
Strong background in computer science and algorithms
Desired Skills:
Experience with Big Data technologies such as Kafka, Kinesis, Hadoop, Spark
Experience with relational and NoSQL databases
Experience with stream processing, messaging semantics, scheduling algorithms, and distributed systems fundamentals
Experience with large-scale metrics and monitoring
Experience with performance testing, troubleshooting, and tuning