Join Tether and Shape the Future of Digital Finance

At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions enable seamless integration of reserve-backed tokens across blockchains, empowering businesses worldwide. Transparency and security underpin every transaction, fostering trust in our innovative digital finance ecosystem.

Innovate with Tether

Tether Finance :
Offers the trusted USDT stablecoin and digital asset tokenization services.

Additional Initiatives :
Tether Power :
Focuses on sustainable energy solutions for Bitcoin mining.

Tether Data :
Develops AI and data-sharing technologies like KEET.

Tether Education :
Provides digital learning opportunities for individuals.

Tether Evolution :
Merges technology with human potential for groundbreaking innovations.

Why Join Us?

Our global team works remotely, fostering innovation in fintech. If you excel in English and want to shape the future of digital finance, Tether is your platform.

About the job :
As part of our AI model team, you will innovate in model serving and inference architectures, optimizing deployment strategies for high responsiveness and scalability across various systems, including resource-limited devices and complex multi-modal architectures.

Responsibilities :
Design and deploy efficient model serving architectures for diverse environments.
Establish performance metrics such as latency, throughput, and memory usage (see the sketch after the requirements below).
Conduct inference testing in simulated and real-world environments, monitoring key performance indicators.
Prepare datasets and scenarios to evaluate model performance on low-resource devices.
Diagnose and optimize serving pipelines for scalability and reliability.
Collaborate with teams to integrate optimized frameworks into production, ensuring continuous improvement.

Requirements :

A degree in Computer Science or a related field, preferably a PhD in NLP or Machine Learning, with proven experience in inference optimization on mobile devices, expertise in CPU/GPU kernel development, and a strong understanding of model serving architectures. Proficiency in developing end-to-end inference pipelines and applying empirical research to overcome challenges is essential.
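To ground the metrics named above, here is a minimal, self-contained sketch of an inference benchmarking harness that reports p50/p95 latency, throughput, and peak memory. The `run_inference` stub, the batch shapes, and the use of `tracemalloc` (which tracks Python-heap allocations only) are illustrative assumptions, not Tether's actual serving stack; a real harness would call into a model runtime and read device-level memory counters.

```python
import statistics
import time
import tracemalloc

def run_inference(batch):
    # Hypothetical stand-in for a real model call (e.g. an ONNX Runtime
    # or llama.cpp session); it just burns a little CPU so the harness
    # below runs end to end.
    return [sum(range(1000)) for _ in batch]

def benchmark(batches, warmup=3):
    # Warm-up iterations let caches and allocators settle before we
    # start recording numbers.
    for batch in batches[:warmup]:
        run_inference(batch)

    latencies = []
    items = 0
    tracemalloc.start()
    start = time.perf_counter()
    for batch in batches:
        t0 = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - t0)
        items += len(batch)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    return {
        "p50_latency_ms": statistics.median(latencies) * 1e3,
        # quantiles(n=20) yields 19 cut points; index 18 ~ 95th percentile.
        "p95_latency_ms": statistics.quantiles(latencies, n=20)[18] * 1e3,
        "throughput_items_per_s": items / elapsed,
        "peak_mem_mib": peak_bytes / 2**20,
    }

if __name__ == "__main__":
    # Fifty synthetic batches of eight requests each (placeholder inputs).
    batches = [[f"request-{i}-{j}" for j in range(8)] for i in range(50)]
    print(benchmark(batches))
```

In practice, percentile latency and peak memory matter more than averages on low-resource devices, which is why the sketch reports p95 and peak rather than means.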