We are seeking an experienced QA Manual Tester with strong knowledge of Microsoft Azure, AWS, and AI/ML-based workflows to ensure the delivery of high-quality AI products and data-driven solutions. The ideal candidate will have deep experience validating AI models, data pipelines, cloud-native applications, microservices, and API-driven systems.
Start date: 6th of April
Duration: 4 months + extensions
Location: Remote
Quality Assurance & Testing
* Develop and execute test plans, test cases, and test scenarios for AI/ML applications, data pipelines, APIs, and cloud-based services.
* Perform functional, regression, integration, system, UAT, and exploratory testing.
* Validate AI model behaviour, including model inputs, outputs, prediction accuracy, edge cases, and data quality.
* Ensure quality validation of training data, inference workflows, and model deployment pipelines.
* Conduct manual API testing using tools such as Postman, Swagger, or similar.
* Log, track, and verify defects using Azure DevOps, Jira, or similar tools.
* Support testing of AWS-based workloads such as S3, Lambda, SageMaker, EC2, RDS, IAM roles, and cloud security validations.
* Conduct environment testing, configuration validation, and service-level monitoring across cloud environments.
AI/ML Project Testing
* Work closely with data scientists and ML engineers to understand model requirements, model metrics, and dataset versions.
* Test data preprocessing, model training jobs, inference endpoints, and model versioning workflows.
* Validate ML pipelines (MLOps) and CI/CD workflows for model deployment and monitoring.
* Collaborate with cross-functional teams including AI engineers, product managers, data engineers, and DevOps teams.
* Participate in requirements reviews, sprint planning, retrospectives, and release readiness activities.
* Provide clear, concise feedback on quality risks and improvements.