
We recently hosted a wide-ranging talk on the fundamentals of generative AI, led by Daniel Billotte, our Director of Curriculum and Classroom Infrastructure. He covered topics including artificial intelligence terminology, uses, limitations, and common myths.
For aspiring or current software developers, this discussion offers a well-rounded look at an important technology that is already reshaping the industry, and one they can learn to harness on the job to great effect.
The following is a summary of Daniel’s talk, and we encourage you to watch it in full here.
Highlights from our conversation about generative AI
Addressing Myths about AI
Daniel started the conversation by addressing a couple of common AI-related myths.
- First, he discussed the myth that AI will replace software developers. While AI is certainly changing aspects of software development, and will require adjustments from new and seasoned professionals alike, it is far from replacing the value of a human software engineer and everything the job requires. In fact, a recent report from the U.S. Bureau of Labor Statistics explored the increased need for engineers driven by the growth of AI. The report says that “increased demand for software developers, software quality assurance analysts, and testers will stem from the continued expansion of software development for artificial intelligence (AI), Internet of Things (IoT), robotics, and other automation applications.”
- He also addressed the myth that developers don’t need to learn programming because AI will soon do it for them. It’s important to deeply understand the foundations of coding and software engineering while also learning the foundations of generative AI; the two work together to enhance productivity and results. In our programs, we teach students how to integrate AI tools like GitHub Copilot into their workflow only after they’ve learned to be proficient programmers without them. Students then use the tool to build a portfolio project that demonstrates their ability to evaluate and blend AI-generated code with their own (see the sketch after this list). Daniel also notes that software developers do more than write code: they identify issues, fix bugs, collaborate with teammates, brainstorm solutions, and use a variety of other tools to ensure projects and applications work effectively and solve real-world problems.
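To make that evaluation step concrete, here is a minimal, hypothetical sketch (not from the talk): the first function stands in for a Copilot-style suggestion with a subtle edge-case bug, and a quick test surfaces the problem before a reviewed version is kept.

```python
# Hypothetical example: reviewing an AI-assisted suggestion before accepting it.
# The "suggested" function and its bug are illustrative, not from the talk.

def average_suggested(values):
    """AI-style suggestion: computes a mean, but crashes on an empty list."""
    return sum(values) / len(values)

def average_reviewed(values):
    """Reviewed version: handles the empty-list edge case explicitly."""
    if not values:
        return 0.0
    return sum(values) / len(values)

# A quick test surfaces the difference before the code is merged.
assert average_reviewed([]) == 0.0
assert average_reviewed([2, 4, 6]) == 4.0

try:
    average_suggested([])
except ZeroDivisionError:
    print("Suggested version fails on empty input; the reviewed version is kept.")
```

The point is not that AI suggestions are bad, but that the developer stays responsible for edge cases, tests, and the final call on what ships.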
AI Vocabulary
Artificial intelligence can be complicated, and software engineers need to understand its terminology. Daniel covered important vocabulary to help viewers better understand AI in general.
He discussed how “artificial intelligence” is a broad term that can be broken down into many types, and how “machine learning” is a subset of AI in which machines learn from the data they are given without being explicitly programmed. Machine learning lives within artificial intelligence, and within machine learning are neural networks, which are trained via feedback.
Then, within neural networks you have transformers, Large Language Models (LLMs), and Generative Pre-trained Transformers (GPTs). GPTs power popular chatbots like ChatGPT, Gemini, and Claude, as well as code assistants like Copilot, which is an important tool for programmers (and one we teach in our coding bootcamps).
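As a rough way to visualize how these terms nest inside one another, the short sketch below encodes the hierarchy described in the talk as plain Python data. It is an informal illustration of the containment relationships, not a formal taxonomy, and the example products listed are simply the ones mentioned above.

```python
# A rough map of the terminology hierarchy from the talk, encoded as nested dicts.
ai_taxonomy = {
    "artificial intelligence": {
        "machine learning": {
            "neural networks": {
                "transformers": {
                    "large language models (LLMs)": {
                        "generative pre-trained transformers (GPTs)": [
                            "ChatGPT", "Gemini", "Claude",   # chatbots
                            "GitHub Copilot",                # code assistant
                        ],
                    },
                },
            },
        },
    },
}

def print_taxonomy(node, depth=0):
    """Walk the nested structure and print each level with indentation."""
    if isinstance(node, dict):
        for name, children in node.items():
            print("  " * depth + name)
            print_taxonomy(children, depth + 1)
    else:  # a list of example tools at the innermost level
        for example in node:
            print("  " * depth + f"- {example}")

print_taxonomy(ai_taxonomy)
```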
All of these terms were discussed in deeper detail during the talk.
Exploring Large Language Models (LLMs)
LLMs are important elements of artificial intelligence, and Daniel discussed what these systems can and can’t do.
For instance, he talked about how LLMs can (once properly directed) create poems, role-play as experts, analyze and summarize information, and write literature. They can even generate code, although this code needs to be verified and reviewed by professionals for accuracy and other issues.
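For readers curious what “properly directed” can look like in practice, here is a minimal sketch of prompting an LLM to role-play as an expert. It assumes the OpenAI Python SDK is installed with an API key set in the environment; the model name and prompts are illustrative, and as noted above, any output still needs a human’s critical review.

```python
# Minimal sketch of directing an LLM via a system prompt (assumes the OpenAI
# Python SDK is installed and OPENAI_API_KEY is set; model name is illustrative).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system message "directs" the model to act as a specific expert.
        {"role": "system", "content": "You are a senior Python code reviewer."},
        {"role": "user", "content": "Summarize the risks of using eval() on user input."},
    ],
)

# The output is a draft to be read critically, not an authoritative answer.
print(response.choices[0].message.content)
```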
Just as important, he discussed what LLMs cannot do. For example, they can’t provide 100% correct answers. They are not aware of their correctness (or incorrectness), and they are not trained on the most recent information. Importantly, they’re also not impartial or unbiased. After all, LLMs are made by people, who are inherently biased by their perspectives and experiences.
Daniel also had some recommendations on what you should and shouldn’t do when using a large language model. He recommended using them to explore new information but reminded viewers of the importance of maintaining critical thinking and skepticism.
Questions
Daniel wrapped up the conversation by answering thoughtful audience questions on topics ranging from our coding bootcamps to the use of AI-generated images and more.
Watch this fascinating discussion today if you want to learn more about using generative AI in software engineering.
Ready to start learning?
Hack Reactor is ready to train you to be a software engineer with a focus on using AI tools for enhanced productivity. Get started by completing your application!