Software Engineer, AI — Code Evaluation & Training (Remote)
Help train large language models (LLMs) to write production-grade code across a wide range of programming languages:
- Compare & rank multiple code snippets, explaining which is best and why.
- Repair & refactor AI-generated code for correctness, efficiency, and style.
- Inject feedback (ratings, edits, test results) into the RLHF pipeline and keep it running smoothly (see the sketch after this list).
- End result: the model learns to propose, critique, and improve code the way you do.
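To make the feedback step concrete, here is a minimal, purely illustrative sketch of what one reviewer feedback record could look like. The class and field names are assumptions for illustration, not the team's actual schema.

```python
# Illustrative only: one possible shape for a reviewer feedback record.
# Field names are hypothetical, not the team's actual schema.
from dataclasses import dataclass

@dataclass
class CodeFeedback:
    prompt_id: str           # the task the model was asked to solve
    snippet_rank: list[str]  # snippet IDs ordered best -> worst
    edited_code: str         # the reviewer's repaired / refactored version
    tests_passed: bool       # did the reviewer's tests pass on the edit?
    rationale: str           # written explanation of the ranking

record = CodeFeedback(
    prompt_id="task-001",
    snippet_rank=["snippet-b", "snippet-a"],
    edited_code="def add(a, b):\n    return a + b\n",
    tests_passed=True,
    rationale="Snippet B handles the empty-input case; snippet A does not.",
)
```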
RLHF in One Line
Generate code ➜ expert engineers rank, edit, and justify ➜ convert that feedback into reward signals ➜ reinforcement learning tunes the model toward code you’d actually ship.
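As a rough illustration of the "convert that feedback into reward signals" step, the sketch below assumes a Bradley-Terry style pairwise objective, a common choice for reward-model training; the function name and scores are placeholders, not the actual training code.

```python
# Illustrative sketch of the "feedback -> reward signal" step, assuming a
# Bradley-Terry style pairwise objective. Names and numbers are placeholders.
import math

def pairwise_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood that the reviewer-preferred snippet outranks the other."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A reward model scores two snippets; the reviewer ranked the first one higher.
loss = pairwise_loss(score_chosen=2.3, score_rejected=0.7)
print(f"pairwise loss: {loss:.4f}")  # lower loss => the model agrees with the reviewer
```

In practice, your rankings, edits, and written justifications feed a step like this at scale.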
What You’ll Need
- 4+ years of professional software engineering experience in Python
- Constraint programming experience is a bonus, but not required.
- Strong code-review instincts—you can spot logic errors, performance traps, and security issues quickly.
- Extreme attention to detail and excellent written communication skills; much of this role is explaining why one approach is better than another, and that cannot be overstated.
- You enjoy reading documentation and language specs and thrive in an asynchronous, low-oversight environment.
What You Don’t Need
- No prior RLHF (Reinforcement Learning from Human Feedback) or AI training experience.
- No deep machine learning knowledge. If you can review and critique code clearly, we’ll teach you the rest.
Tech Stack
We are looking for engineers with a strong command of Python.
Logistics
- Location: Fully remote — work from anywhere
- Compensation: From $30/hr to $70/hr, depending on location and seniority
- Hours: Minimum 15 hrs/week, up to 40 hrs/week available
- Engagement: 1099 contract
Straightforward impact, zero fluff. If this sounds like a fit, apply here!