

AI Model Fine-Tuning Engineer
Actalent
Posted Thursday, July 3, 2025
Posting ID: JP-005391423
AI Model Fine-Tuning Engineer - Phoenix, AZ
We are actively hiring AI Engineers for our growing Semiconductor Manufacturing facilities in Phoenix, AZ! As a Junior, Intermediate, or Senior AI Model Fine-Tuning Engineer, you will play a crucial role in enhancing the adaptability and intelligence of AI models, ensuring they align seamlessly with real-world applications. Using advanced techniques, you will focus on refining model responses so they are context-aware and of high quality. This is an excellent opportunity to pursue cutting-edge work in both AI and Semiconductor Manufacturing, two highly competitive and rewarding fields.
Responsibilities
- Lead the fine-tuning process for large pre-trained models, ensuring they behave appropriately in diverse contexts such as instruction-following, answering questions, or performing tasks.
- Design and implement prompt engineering strategies to enhance model output accuracy, relevance, and coherence.
- Apply Reinforcement Learning from Human Feedback (RLHF) and other behavioral fine-tuning methods to improve alignment with user needs and ethical standards.
- Collaborate with data teams to integrate relevant data and continuously improve model behavior.
- Conduct model evaluations using performance metrics such as accuracy, bias detection, and user feedback to identify areas for improvement (a brief illustrative sketch follows this list).
- Iterate and experiment with different fine-tuning methods to achieve optimal performance for specific use cases.
- Monitor model drift and ensure models remain consistent, reliable, and safe over time.
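For context on the evaluation work described above, here is a minimal sketch of an exact-match accuracy check over a labeled evaluation set. It is illustrative only: the model_answer() helper and the JSONL record format are assumptions, not a description of our internal tooling.

```python
# Illustrative sketch: exact-match accuracy over a labeled evaluation set.
# model_answer() and the JSONL record format are hypothetical placeholders.
import json

def model_answer(prompt: str) -> str:
    """Placeholder for a call to the fine-tuned model under evaluation."""
    raise NotImplementedError

def exact_match_accuracy(eval_path: str) -> float:
    """Score a JSONL file of {"prompt": ..., "expected": ...} records."""
    correct, total = 0, 0
    with open(eval_path) as f:
        for line in f:
            record = json.loads(line)
            prediction = model_answer(record["prompt"]).strip().lower()
            correct += int(prediction == record["expected"].strip().lower())
            total += 1
    return correct / max(total, 1)
```

In practice, checks like this are extended with bias probes, human feedback scores, and ongoing drift monitoring so that models stay consistent, reliable, and safe over time.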
Essential Skills
- Bachelor's degree in Computer Science, Data Science, or a related field.
- Experience fine-tuning large-scale models such as GPT, T5, or BERT, with a focus on behavior and functionality (expected years of experience vary by level; see Additional Skills & Qualifications below).
- Expertise in RLHF, prompt engineering, and zero-shot learning.
- Proficiency in Python and experience with model fine-tuning libraries such as Transformers and DeepSpeed (see the illustrative sketch after this list).
- Experience with popular transformer architectures and frameworks like Hugging Face, TensorFlow, or PyTorch.
- Deep understanding of LLM behaviors, including instruction-following, task completion, and ethical considerations.
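As a concrete illustration of the tooling named above, the following is a minimal supervised fine-tuning sketch using the Hugging Face Transformers Trainer API. The model name, public dataset, and hyperparameters are assumptions chosen for brevity and do not reflect our production models or data.

```python
# Illustrative sketch: supervised fine-tuning with the Hugging Face Trainer.
# Model, dataset, and hyperparameters are examples, not a production recipe.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"   # example pre-trained model
dataset = load_dataset("imdb")     # example public dataset

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Convert raw text into fixed-length token IDs for the model.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./ft-output",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

Behavioral fine-tuning of instruction-following models (for example, with RLHF) builds on the same workflow, layering preference data and a reward signal on top of the supervised stage.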
Additional Skills & Qualifications
- Experience with ethical AI and safety considerations, including bias minimization and handling adversarial inputs.
- Experience with model deployment and real-time experimentation, such as A/B testing.
- Experience levels include:
  - Junior: 0 to 2 years of professional experience, with a strong AI-centered degree and internship experience
  - Intermediate: 1 to 3 years of relevant professional experience
  - Senior: 3+ years of relevant professional experience
Work Environment
The Arizona fab is state-of-the-art, but still undergoing expansion and construction in some areas, which can create a transitional atmosphere. The environment offers exposure to cutting-edge semiconductor technology and complex systems, which allows engineers and technicians to stay competitive with their skills. Candidates must be willing and able to work on-site at our Phoenix, Arizona facility (relocation assistance is offered). The work environment is highly regulated, with state-of-the-art technology and facilities. Personal cell phones are not permitted on-site. The company allows for an average of 85+ hours of learning programs per employee annually, as well as a strong focus on technical training, leadership development, and cross-functional mobility.