Berkley

Principal Engineer - AI Platform

Location: Wilmington, DE
ID: 2025-12459
Date Posted: 6/18/2025 6:51 PM
Company: Berkley Technology Services LLC
Primary Location: US-DE-Wilmington
Category: Information Technology

Company Details


 

Company URL: https://www.berkleytechnologyservices.com/                                        

 

Berkley Technology Services (BTS) is the dynamic technology solution for W. R. Berkley Corporation, a Fortune 500 Commercial Lines Insurance Company. With key locations in Urbandale, IA and Wilmington, DE, BTS provides innovative and customer-focused IT solutions to the majority of WRBC’s 60+ operating units across the globe. BTS’s wide reach ensures that ideas and opinions are considered at every level of the organization to guarantee we find the best solutions possible.  

 

Driven by a commitment to collaboration, BTS acts as consultants to our customers and Operating Units by providing comprehensive solutions that not only address the challenge at hand, but proactively plan for the “What’s Next” in our industry and beyond.  

 

With a culture centered on innovation and entrepreneurial spirit, BTS stands as a community of technology leaders with eyes toward the future: leaders who truly care about growing not only their team members but themselves, and who take pride in their employees who shine. BTS offers endless ways to get involved and the chance to grow your career into a wide range of roles you never knew existed. Come join us as we push forward into the future of industry-leading technological solutions.  

 

Berkley Technology Services: Right Team, Right Technology, Simple and Secure.  

Responsibilities

The Principal Engineer, AI Platform is responsible for designing, building, and maintaining robust AI platforms that support advanced machine learning and artificial intelligence applications. This role works closely with data scientists, software engineers, and product managers to develop scalable solutions that enhance the efficiency, reliability, and performance of our AI systems.

 

  • Design and develop scalable AI platforms to support various machine learning and deep learning models.
  • Collaborate with data scientists to understand model requirements and optimize infrastructure accordingly.
  • Implement and maintain data pipelines, model deployment strategies, and monitoring systems.
  • Ensure high availability and reliability of AI services through effective troubleshooting and performance tuning.
  • Develop and maintain systems primarily in Python, R, or Java, with a focus on building REST APIs and small, secure web front ends.
  • Design and implement database schemas and queries for relational, NoSQL, and graph databases.
  • Integrate with third-party services such as Kafka and Entra ID.
  • Collaborate with the frontend team to integrate frontend and backend systems seamlessly.
  • Ensure the scalability, reliability, and security of backend systems.
  • Troubleshoot and debug issues as they arise.
  • Write optimized code and provide innovative solutions to complex problems.
  • Stay current with industry trends and the latest advancements in AI and machine learning technologies, and incorporate them into the platform.
  • Automate repetitive tasks and improve platform efficiency through scripting and configuration management tools.
  • Provide technical guidance and support to other teams regarding AI platform usage and best practices.
  • Document platform architecture, workflows, and procedures for future reference and scalability.

Qualifications

  • 10+ years progressive engineering experience developing/engineering solutions and leading multiple large-scale data-domain or cross-domain engineering initiatives.
  • Proven experience in building and maintaining AI or machine learning platforms.
  • Strong knowledge of programming languages such as Python, R, and/or Java.
  • Experience with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Experience in optimizing code and infrastructure for large data set manipulation.
  • Excellent knowledge of MySQL, SQL Server, PostgreSQL, MongoDB, and/or CosmosDB.
  • Excellent knowledge of modern vector and graph databases.
  • Excellent knowledge of Redis cache or similar solutions.
  • Experience with version control systems such as Git and GitLab, and with Agile development methodologies.
  • Excellent understanding of NLP and of AI/ML design and implementation concepts.
  • Excellent understanding of REST APIs, API gateways, and SPAs.
  • Excellent problem-solving skills and ability to work in a fast-paced environment.
  • Strong communication and collaboration skills.
  • Excellent knowledge of data storage solutions and database management systems.
  • Proficiency in object-relational mapping (ORM), advanced algorithms, data structures, object-oriented and functional design principles, and best-practice patterns.
  • Experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, and Scikit-learn.
  • Multiple years working with structured and unstructured data within a cloud-based data environment (Redshift, Databricks, Hadoop, Snowflake, Spark, unmanaged Kafka, etc.).
  • Experience with Master Data Management (MDM) systems and processes.
  • Experience with Data Loss Prevention (DLP), encryption/redaction strategies, etc. for protecting PII/sensitive data.
  • Good understanding of processor architecture and the ability to work with providers to map use cases to appropriate platforms.
  • Bachelor's degree in Computer Science, Information Technology, Information Systems, or a related discipline; equivalent experience and/or alternative qualifications will be considered.
  • 10%-20% travel required.

Preferred Qualifications

  • Advanced degree in computer science, data science, AI, or related field.
  • Experience with big data technologies (Hadoop, Spark).
  • Experience with reinforcement learning, generative AI, or autonomous systems.
  • Knowledge of MLOps practices and tools.
  • Knowledge of insurance-specific AI applications.
  • Hands-on experience working with AI/ML as a Data Scientist.
  • Experience in deploying machine learning models in production environments.
  • Understanding of distributed systems and microservices architecture.

The Company is an equal employment opportunity employer. 

