Responsibilities
1. Responsible for data cleaning (ETL) and data warehouse construction to support large-scale AI models.
2. Responsible for training and fine-tuning large AI models to meet the requirements of specific business scenarios.
3. Responsible for developing supporting tools (e.g., dashboards) and general business logic to ensure AI model applications are practical.
4. Must have hands-on development experience and be able to lead a team or independently complete projects related to data collection and development.
Requirements
1. A degree in computer science or a related field is preferred. Must have solid knowledge of machine learning, deep learning, and natural language processing; at least 1 year of experience developing GPT or Gemini applications; and proficiency in deep learning frameworks such as PyTorch or TensorFlow.
2. Familiar with models such as Transformer, BERT, and GPT, and with fine-tuning algorithms such as LoRA; hands-on experience fine-tuning models is required.
3. Must have Java programming experience.
4. Must have experience in data warehouse development and construction, such as building ETL data-cleaning pipelines with Flink.
5. Experience with large model pre-training and practical application in business scenarios is a plus.
6. Must have hands-on experience deploying large models based on open-source frameworks.
7. Experience in conversational AI, marketing content generation, or machine translation is preferred.
8. Priority will be given to candidates with hands-on Google Cloud Platform (GCP) experience, particularly with BigQuery.