Career Paths for AI Research Scientists: My Journey and Insights
Introduction
Hi guys! Welcome to my blog. I'm stoked to have you here. I'm currently rocking the roles of Senior Research Scientist at MARS (Motor AI Recognition Solution) and Postdoctoral Fellow at Chulalongkorn University.
I'm Teerapong Panboonyuen, but you can call me Kao.
In this space, I'm excited to share the highs and lows of my AI journey, how I juggle academic and industry work, and the coolest trends shaking up the AI world. Stick around and dive into the world of AI with me!
Wrote about generative AI trends and practical applications. https://t.co/SphjkqXjNk
— Kao Panboonyuen (@kaopanboonyuen) August 3, 2024
Here is what ChatGPT suggested as a fun tweet for the blog:
Explore the future of Generative AI!
Uncover the latest trends and see how AI is revolutionizing various industries.
My Journey into AI Research
I got into AI back when I was doing my Master's at Chulalongkorn University. The challenges and possibilities in AI were just too exciting to ignore. By 24, I had my Master's under my belt, and by 27, I was rocking a Ph.D. Since then, I've been diving deep into AI research, especially in areas like Remote Sensing and Computer Vision. I'm all about the hardcore math behind AI, like optimization and statistical learning. My big goal? Using AI to solve real-world problems and make the world a better place. If you want to see what I'm working on, check out my profile here: kaopanboonyuen.github.io.
Exploring the Life of an AI Research Scientist
In the world of AI research, every day is a blend of cutting-edge exploration and meticulous analysis. As an AI research scientist, your life revolves around decoding complex algorithms, fine-tuning models, and pushing the boundaries of what artificial intelligence can achieve. The journey typically involves diving into vast datasets, developing and experimenting with sophisticated neural networks, and translating theoretical concepts into practical, real-world applications. The thrill of seeing a new model perform exceptionally well or uncovering a novel insight drives the passion in this field. Collaboration with peers and staying abreast of the latest advancements is crucial, making continuous learning an integral part of the job.
Transforming Research with Gemini and Modern LLMs
The landscape of AI research is undergoing a significant transformation with the advent of advanced large language models (LLMs) like Gemini. These cutting-edge tools are revolutionizing how researchers approach their work, enabling more efficient data processing and deeper insights. Gemini's innovative architecture offers enhanced capabilities in understanding and generating human-like text, which streamlines the development of sophisticated AI systems. By leveraging LLMs, researchers can automate complex tasks, accelerate experimentation, and uncover patterns that were previously challenging to detect. This paradigm shift not only boosts productivity but also opens new avenues for exploration, setting the stage for groundbreaking advancements in artificial intelligence.
Right now, I'm diving deep as a Postdoctoral Fellow in AI research, a role I've embraced from the age of 27 to now, at 31. My journey involves crafting next-gen algorithms in Pattern Recognition, Optimization Theory, and Statistical Learning. At MARS, I'm on the front lines, applying AI to tackle real-world challenges, especially in the auto insurance sector.
Curious to know more about my work and adventures? Check out my profile here: kaopanboonyuen.github.io.
Balancing Academia and Industry
Why do I juggle both academic and industrial roles? The answer lies in the different kinds of excitement each provides. In academia, I'm drawn to the elegance and complexity of theoretical work: understanding AI at its core and pushing its boundaries. On the other hand, the industrial side offers the thrill of seeing AI solutions deployed in real-world applications, making a tangible impact.
Key Qualities for Ideal AI Agents
The ideal characteristics envisioned for AI agents are numerous, each presenting its own significant research challenge before even considering the automatic acquisition of these traits:
- Learning to learn: The ability to enhance its learning process over time [2]–[8].
- Lifelong learning: Engaging in continual and incremental learning throughout its existence [9]–[13].
- Gradual knowledge and skill accumulation: Building up knowledge and abilities progressively, layer by layer.
- Reuse of learned knowledge: Applying previously acquired skills to discover and learn new ones, incorporating both forward and backward knowledge transfer [10].
- Open-ended exploration: The capability to explore without predefined boundaries [14], [15] and to set its own self-invented goals for learning [16]–[20].
- Out-of-distribution generalization: Extending its learning capabilities to new and previously unseen problems [21]–[24] and making logical extrapolations beyond its initial training data [25], [26].
Fig. 1. The Badger agent is trained with bi-level optimization, involving two loops: the outer loop, which focuses on lifelong learning and other requirements, and the inner loop, where the agent undergoes extensive training on various curricula to develop skills approaching human-level proficiency. Goodai-Research-Roadmap
Fig. 2. I had the chance to dive into "Career Paths for AI Research Scientists: My Journey and Insights" during a talk at Sirindhorn Science Home (SSH). It was a great opportunity to share my experiences and offer some tips on navigating the exciting world of AI research. Sirindhorn Science Home (SSH)
There are various strategies to develop agents with these properties. At GoodAI, they have converged on foundational principles such as the modularity of agents, a shared policy across modules with varying internal states, and a blend of meta-learning in the outer loop followed by open-ended learning in the inner loop. These principles are central to their Badger architectures and will be discussed further in the section "Towards Implementation." It is essential to highlight that these desired properties should manifest during the agent's operational phase, specifically in the inner loop (the agent's lifetime). They often utilize a meta-learning approach, which involves a bi-level optimization process where optimization occurs at two levels [4], [27], [28]. This meta-learning framework is considered the default setting throughout this discussion unless otherwise noted.
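To make the idea of bi-level optimization concrete, here is a minimal Reptile-style meta-learning sketch in PyTorch. This is my own toy illustration, not GoodAI's Badger implementation: the task distribution, model size, and hyperparameters are all made up for the example, but the structure (an inner loop where a copy of the agent adapts, and an outer loop that updates the meta-parameters) is the same bi-level pattern described above.

```python
import copy
import torch
import torch.nn as nn

def sample_task():
    """Sample a random linear regression task: y = a*x + b with task-specific a, b."""
    a, b = torch.randn(2)
    x = torch.randn(32, 1)
    return x, a * x + b

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for meta_step in range(100):                      # outer loop: meta-learning
    x, y = sample_task()
    learner = copy.deepcopy(model)                # fresh copy adapts during the inner loop
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                  # inner loop: the agent's "lifetime"
        loss = nn.functional.mse_loss(learner(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                         # outer update: Reptile rule, move
        for p, q in zip(model.parameters(), learner.parameters()):
            p += meta_lr * (q - p)                # meta-parameters toward adapted weights
```

The outer loop never solves any single task directly; it only nudges the initialization so that the inner loop adapts faster on the next task it encounters, which is the essence of "learning to learn."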
The Cool Factor in Research
One of the key motivators for any researcher is the “cool factor”: that sense of excitement when working on something groundbreaking. For me, that thrill comes from applying AI to satellite imagery for Land Use and Land Cover (LULC) analysis in agriculture. The very idea of using AI to derive insights from images captured from space is inherently fascinating.
Understanding the Three Types of Artificial Intelligence
For those pursuing a career as AI research scientists, it’s essential to understand the different categories of AI based on their capabilities:
- Narrow AI (Weak AI or ANI): Narrow AI is specialized in performing specific tasks. It is designed with a narrow focus and cannot operate outside its pre-defined capabilities. Research in this area involves developing and fine-tuning algorithms to perform specialized tasks efficiently, such as facial recognition, language translation, and recommendation systems. Career opportunities here include roles like AI specialist, data scientist, and machine learning engineer.
- General AI (Strong AI or AGI): General AI aims to mirror human cognitive abilities, enabling it to understand, learn, and apply knowledge across a wide range of tasks. Working in this field requires a deep understanding of various AI and machine learning techniques, and researchers often focus on creating systems that can think and reason like humans. Careers in this area might involve research positions in advanced AI labs, academia, or tech companies that are pioneering AGI development.
- Artificial Superintelligence (ASI): ASI represents the pinnacle of AI development, where machines would surpass human intelligence across all domains. Research here is still theoretical but involves exploring concepts that could eventually lead to machines with superior cognitive abilities. Professionals focusing on ASI are usually involved in speculative research, ethical considerations, and futuristic technology development. Career paths might include roles as AI ethicists, theoretical AI researchers, or innovators at cutting-edge research institutions.
Understanding these AI types (Fig. 3) can guide aspiring AI researchers in choosing the right focus area for their careers, whether it’s enhancing specialized AI applications or contributing to the quest for creating truly intelligent machines.
Fig. 3. Types of Artificial Intelligence (Image source: viso.ai, viso.ai/artificial-intelligence-types)
Roadmap to Learn AI
Embark on a structured journey to master Artificial Intelligence with this comprehensive roadmap. Begin with foundational mathematics, including linear algebra, calculus, and statistics, essential for understanding AI concepts. Gain proficiency in tools like Python and PyTorch, and dive into machine learning by writing algorithms from scratch, competing in challenges, and deploying models. Expand your skills in deep learning through practical applications and competitive projects, and explore advanced topics like large language models. Stay updated with the latest trends and resources to ensure continuous learning and growth in the field of AI.
Mathematics
- Linear Algebra: Learn the fundamentals of linear algebra, crucial for understanding data manipulation and algorithmic operations. For a comprehensive introduction, refer to 3Blue1Brown's Essence of Linear Algebra and Introduction to Linear Algebra for Applied Machine Learning with Python. Dive deeper with Imperial College London's lectures on Linear Algebra.
- Calculus: Explore how calculus enables optimization in machine learning, crucial for learning algorithms and adjusting models. Key resources include 3Blue1Brown's Essence of Calculus and MIT OpenCourseWare's Calculus Courses.
- Probability and Statistics: Understand the role of probability and statistics in making predictions and decisions under uncertainty. Useful resources are StatQuest's Statistics Fundamentals and the book Mathematics for Machine Learning. A short worked example tying these three areas together follows after this list.
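Here is how all three areas meet in a single line of machine learning: the gradient-descent update on an empirical risk. This is my own illustrative summary, not a formula taken from any of the linked courses.

```latex
% Empirical risk (statistics: an average loss over data) minimized by gradient
% descent (calculus: the gradient) on a parameter vector (linear algebra).
\[
L(\theta) = \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(f_\theta(x_i),\, y_i\bigr),
\qquad
\theta_{t+1} = \theta_t - \eta\,\nabla_\theta L(\theta_t)
\]
```

Here \(\theta\) is the parameter vector, \(\eta\) the learning rate, and \(\ell\) a per-example loss; nearly every model in the later sections of this roadmap is trained by some variant of this update.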
Tools
- Python: Begin with practical Python programming using Practical Python Programming and advance to Advanced Python Mastery. For deeper insights, explore David Beazley's courses.
- PyTorch: Learn PyTorch with PyTorch Tutorials by Aladdin Persson and use resources like the official PyTorch tutorials and Programming PyTorch for Deep Learning. A minimal training-loop sketch follows below.
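To show what these tools look like once they are put together, here is a minimal, self-contained PyTorch training loop on synthetic data. The architecture, data, and hyperparameters are placeholders I made up for illustration, not part of any linked tutorial.

```python
import torch
import torch.nn as nn

# Minimal supervised training loop on synthetic data (illustrative only).
torch.manual_seed(0)
x = torch.randn(256, 10)                  # 256 samples, 10 features
y = x.sum(dim=1, keepdim=True)            # simple target: the sum of the features

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 50 == 0:
        print(f"epoch {epoch + 1}: loss = {loss.item():.4f}")
```

Everything else in the roadmap (real datasets, larger models, experiment tracking, deployment) layers on top of this basic loop.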
Machine Learning
- Write from Scratch: Practice building algorithms from scratch with repositories such as ML-From-Scratch and homemade-machine-learning. For a more in-depth challenge, try MiniTorch: A DIY Course on Machine Learning Engineering. A tiny from-scratch example is sketched after this list.
- Compete: Apply your skills in machine learning competitions on platforms like Kaggle and bitgrit. Study past winning solutions to enhance your learning.
- Do Side Projects: Start side projects using datasets from sources like NASA Earth data and create user interfaces with Streamlit. Refer to Getting Machine Learning to Production for practical insights.
- Deploy Them: Gain experience in deploying models and managing their lifecycle with resources like Made With ML and Evidently AI. Learn about tracking experiments and monitoring model performance with DataTalksClub's MLOps Zoomcamp.
- Supplementary: Explore additional materials such as Machine Learning with PyTorch and Scikit-Learn and Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning.
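In the spirit of the from-scratch repositories mentioned above, here is a tiny logistic-regression classifier written with NumPy only. The synthetic data and hyperparameters are invented for the sketch; it is a starting point for practice, not a reference implementation.

```python
import numpy as np

# Logistic regression trained with batch gradient descent, no ML framework.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ w + b)                       # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)              # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```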
Deep Learning
- Fast.ai: Engage with Fast.ai's courses for a top-down approach to deep learning. Explore further with Full Stack Deep Learning for a comprehensive view.
- Do More Competitions: Participate in advanced competitions like PlantTraits2024 to apply deep learning techniques.
- Implement Papers: Study and implement research from resources like labml.ai and Papers with Code.
- Computer Vision: Delve into CS231n: Deep Learning for Computer Vision for an in-depth understanding of computer vision applications. A short pretrained-model example follows after this list.
- NLP: Learn from Stanford's CS 224N: Natural Language Processing with Deep Learning and Hugging Face's NLP Course.
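As a first hands-on computer-vision experiment, here is a short sketch that classifies a single image with a pretrained ResNet-18 from torchvision. I am assuming torchvision 0.13+ for the weights API, and the image path is a placeholder you would replace with your own file.

```python
import torch
from torchvision import models
from PIL import Image

# Classify one image with a pretrained ResNet-18 (the image path is a placeholder).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                 # preprocessing that matches the weights

image = Image.open("example.jpg").convert("RGB")  # replace with your own image
batch = preprocess(image).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], f"{top_prob.item():.2%}")
```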
Large Language Models
- Watch Neural Networks: Zero to Hero: Get a comprehensive overview of large language models with Andrej Karpathy's Neural Networks: Zero to Hero.
- Free LLM Boot Camp: Explore free boot camps on LLMs, such as Full Stack Deep Learning's LLM Bootcamp.
- Build with LLMs: Develop LLM applications using Building LLM Applications for Production and OpenAI Cookbook.
- Participate in Hackathons: Join AI hackathons on lablab.ai and connect with other participants.
- Read Papers: Stay updated with LLM research from Sebastian Raschka's articles and Papers with Code.
- Write Transformers from Scratch: Follow guides to build transformers from scratch, such as The Transformer Family Version 2.0 | Lil'Log. A plain-PyTorch attention sketch follows after this list.
- Some Good Blogs: Read insightful blogs like Gradient Descent into Madness and The Illustrated Transformer.
- Watch Umar Jamil: View detailed explanations and coding tutorials by Umar Jamil.
- Learn How to Run Open-Source Models: Get practical experience with open-source LLMs using ollama.
- Prompt Engineering: Study techniques for effective prompt engineering with resources like Prompt Engineering | Lil'Log.
- Fine-Tuning LLMs: Explore guides on fine-tuning models with Hugging Face's fine-tuning guide and Fine-Tuning - The GenAI Guidebook.
- RAG: Learn about Retrieval-Augmented Generation with articles such as Building RAG-based LLM Applications for Production.
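For the "write transformers from scratch" item above, here is single-head scaled dot-product self-attention in plain PyTorch. The tensor shapes and sizes are illustrative assumptions; a real transformer adds multi-head projections, masking, residual connections, and feed-forward layers on top of this core.

```python
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_head) projection matrices.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v                        # queries, keys, values
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # (batch, seq, seq)
    weights = scores.softmax(dim=-1)                           # attention weights per token
    return weights @ v                                         # weighted sum of values

# Tiny usage example with made-up sizes.
batch, seq_len, d_model, d_head = 2, 5, 16, 8
x = torch.randn(batch, seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)   # torch.Size([2, 5, 8])
```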
How to Stay Updated
- Regularly engage with leading blogs, research papers, and online courses to remain current with the latest advancements in AI and machine learning.
Other Curriculums/Listicles You May Find Useful
- Explore additional curriculums and listicles for a broader understanding of AI topics, available through various educational and professional resources.
Highlighted Publications
Throughout my career, I’ve had the privilege to contribute to several exciting research projects. Below are some of my notable publications, each representing a unique challenge and innovative solution:
- MARS: Mask Attention Refinement with Sequential Quadtree Nodes for Car Damage Instance Segmentation
  Published in: ICIAP 2023 Workshops, Lecture Notes in Computer Science, Springer, Cham
  This paper introduces a novel approach for car damage detection using Mask Attention Refinement with sequential quadtree nodes, specifically designed to enhance accuracy in the segmentation of damaged areas on vehicles.
- MeViT: A Medium-Resolution Vision Transformer for Semantic Segmentation on Landsat Satellite Imagery for Agriculture in Thailand
  Published in: Remote Sensing, 2023
  MeViT is a Vision Transformer-based model that processes medium-resolution satellite images to classify different types of land cover in agricultural areas. This research has significant implications for monitoring and managing agricultural resources.
- Object Detection of Road Assets Using Transformer-Based YOLOX with Feature Pyramid Decoder on Thai Highway Panorama
  Published in: Information, 2022
  This paper explores an innovative method for detecting road assets, such as traffic signs and barriers, using a Transformer-based YOLOX model. The approach significantly improves the accuracy and reliability of object detection in complex environments.
- Transformer-Based Decoder Designs for Semantic Segmentation on Remotely Sensed Images
  Published in: Remote Sensing, 2021
  Here, we investigate the use of Transformer-based architectures for segmenting high-resolution remote sensing images. This work pushes the boundaries of traditional convolutional neural networks by leveraging the power of self-attention mechanisms.
- Semantic Labeling in Remote Sensing Corpora Using Feature Fusion-Based Enhanced Global Convolutional Network
  Published in: Remote Sensing, 2020
  This publication introduces a feature fusion approach for semantic labeling tasks, combining multiple feature maps to improve the accuracy of land cover classification in remote sensing imagery.
Key Trends in AI Research
The field of AI is constantly evolving, with several exciting trends emerging. Here’s a look at some of the most promising areas:
- Generative AI: With models like GANs and diffusion models, generative AI is revolutionizing how we create content, from art and music to realistic simulations.
- Self-Supervised Learning: This approach is gaining traction as it reduces the need for labeled data, making it easier to train AI models on vast datasets.
- AI for Social Good: Applications of AI in healthcare, environmental monitoring, and disaster response highlight the technology’s potential to solve some of humanity’s biggest challenges.
- Explainable AI (XAI): As AI systems become more complex, the need for transparency and interpretability is critical. XAI focuses on making AI decisions understandable to humans.
- AI Security and Ethics: With the growing deployment of AI, addressing ethical considerations and ensuring AI security are more important than ever.
Inspiration for Aspiring Researchers
For those considering a career in AI research, my advice is simple: find a topic that excites you. Choose projects that you find inherently cool. This passion will sustain you through the challenges of research. Start by exploring current literature to understand what has already been done and identify gaps. Decide whether to build on existing models or innovate from scratch. Focus on how you can improve accuracy, speed, or applicability of AI solutions.
Remember, research is a journey, not a destination. Be curious, be patient, and never stop learning. The most rewarding part of research is not just the recognition that comes from publishing a paper but seeing your work make a real-world impact. Whether it’s through advancing technology or improving lives, your contribution as a researcher can make a difference.
Before I Go: Here's Some Exciting News!
I'm thrilled to announce that I've been awarded a prestigious scholarship by Her Royal Highness Princess Maha Chakri Sirindhorn to attend the Global Young Scientists Summit (GYSS) (Fig. 4) in Singapore from January 6-10, 2025. This recognition is a major boost for my passion and drive to push the envelope in innovation!
Fig. 4. I am excited to announce that I have been awarded a prestigious scholarship by Her Royal Highness Princess Maha Chakri Sirindhorn to attend the Global Young Scientists Summit (GYSS) in Singapore from January 6-10, 2025. This esteemed recognition greatly fuels my passion and determination to drive forward innovation! (Facebook) Global Young Scientists Summit
Her Royal Highness Princess Maha Chakri Sirindhorn has graciously selected the Thai delegates who will attend the Global Young Scientists Summit (GYSS) 2025. https://t.co/APrbWBQynK #ChulaEngineering #Chula pic.twitter.com/UpVqWCvHBo
— ChulaEngineering_Official (@cueng_official) August 30, 2024
The Global Young Scientists Summit (GYSS) is a dynamic annual event that brings together exceptional young researchers and leading scientific minds from around the world. Held in Singapore, this summit is a unique platform for discussing groundbreaking research and exploring how it can address major global challenges.
With a strong emphasis on innovation and collaboration, GYSS is where future scientific leaders converge to share ideas and shape the future of research. To dive deeper into this inspiring event, visit GYSS and join the conversation using #GYSS!
Just a heads up: once I wrap up at GYSS, I'll be crafting a new blog to share all the awesome experiences with you. Stay tuned!
Conclusion
Being part of the AI revolution is a unique privilege. It’s a field where theoretical elegance meets real-world impact, offering endless opportunities for those willing to explore. Whether you are inclined toward academia or industry, or like me, both, there is a place for you in AI research. Let’s continue to push the boundaries and contribute to a future where AI plays a positive and transformative role in our lives.
Thank you for reading! I look forward to hearing your thoughts and engaging in discussions about AI research and career paths.
Citation
Panboonyuen, Teerapong. (Sep 2024). Career Paths for AI Research Scientists: My Journey and Insights. Blog post on Kao Panboonyuen. https://kaopanboonyuen.github.io/blog/2024-09-01-career-paths-for-ai-research-scientist/
Or
@article{panboonyuen2024careerpaths,
title = "Career Paths for AI Research Scientists: My Journey and Insights.",
author = "Panboonyuen, Teerapong",
journal = "kaopanboonyuen.github.io/",
year = "2024",
month = "Sep",
url = "https://kaopanboonyuen.github.io/blog/2024-09-01-career-paths-for-ai-research-scientist/"
}
References
- https://www.upwork.com/resources/how-to-become-an-ai-research-scientist/
- https://varthana.com/student/skills-required-to-get-a-job-in-the-artificial-intelligence-industry/
- https://www.goodai.com/goodai-research-roadmap-2021-2022/
- https://medium.com/bitgrit-data-science-publication/a-roadmap-to-learn-ai-in-2024-cc30c6aa6e16/
- https://viso.ai/deep-learning/artificial-intelligence-types/