Join Tether and Shape the Future of Digital Finance

At Tether, we're pioneering a global financial revolution with innovative blockchain solutions that enable seamless, secure, and transparent digital transactions worldwide.

About Tether

Our product suite includes the trusted stablecoin USDT, energy-efficient Bitcoin mining solutions, advanced data-sharing applications like KEET, and educational initiatives that empower individuals in the digital economy.

Why Join Us?
Work remotely with a global team of talented professionals dedicated to innovation in fintech.
If you have excellent English communication skills and a passion for technology, Tether offers a dynamic environment to grow and make an impact.

About the Job

We are seeking an experienced AI Model Evaluation Specialist to develop and implement rigorous evaluation frameworks across the AI lifecycle, focusing on model responsiveness, efficiency, and reliability in real-world applications.
Responsibilities include designing metrics, benchmarking, collaborating with cross-functional teams, and improving evaluation pipelines.

Responsibilities

- Develop and deploy evaluation frameworks that assess models during pre-training, post-training, and inference, tracking metrics such as accuracy, latency, throughput, and memory usage.
- Create high-quality evaluation datasets and benchmarks to ensure consistent measurement of model robustness and improvements.
- Collaborate with product, engineering, and data science teams to align evaluation strategies with business goals, presenting insights via dashboards and reports.
- Analyze evaluation data to identify bottlenecks, optimize model performance, and ensure resource-efficient deployment.
- Refine evaluation methodologies through empirical research, staying current with emerging techniques to enhance model reliability and value.

Minimum Qualifications

- A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or a similar area, with a strong publication record, is preferred.
- Proven experience designing and evaluating AI models at multiple lifecycle stages, with expertise in evaluation metrics and frameworks.
- Strong programming skills and experience building scalable evaluation pipelines, including familiarity with performance metrics such as latency, throughput, and memory footprint.
- Ability to conduct iterative experiments, stay abreast of emerging trends, and continuously refine evaluation practices.
- Experience collaborating with cross-functional teams and translating technical insights into actionable recommendations.