Week 4 - BALT 4364 - CH 4
Chapter four covers a topic that I think is hard for most people, myself included, to understand. That topic would be deep learning, and I get caught up in trying to figure out how something that is not "alive" is able to learn. Lucky for me, this chapter was able to answer some of those questions. Computers use neural networks modeled on the structure and function of the human brain. What separates deep learning from other types of machine learning is that it can learn from the data on its own, without us telling it what to look for. In my opinion, that is amazing and scary at the same time.
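To make that "learning on its own" idea a little less mysterious, here is a tiny sketch of a single artificial neuron adjusting its own weights to fit the AND function. This is just an illustration I put together, not anything from the chapter, and the learning rate and epoch count are arbitrary choices:

```python
# A single neuron "learning" the AND function by nudging its own weights.
# All numbers here (learning rate, epochs) are illustrative choices.

def train_and_gate(epochs=20, lr=0.1):
    # Training data: inputs and the correct AND output for each.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]  # weights start at zero
    b = 0.0         # bias starts at zero
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Step activation: fire (1) if the weighted sum crosses zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge the weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

w, b = train_and_gate()
print(predict(w, b, 1, 1))  # → 1: the neuron learned AND from examples alone
```

Nobody told the neuron what AND means; it only saw examples and corrected itself. Deep learning stacks many layers of neurons like this, which is why it can pick out features on its own.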
So, what are some examples of deep learning that we interact with? Our phones have a couple of good ones. Siri uses deep learning for speech recognition, and face recognition on our phones relies on the same technology. The end of the chapter provides us with some ChatGPT prompts that I thought would help me make sense of some of this, but I decided to add a prompt of my own. I asked ChatGPT if deep learning is expensive to implement, and this is what it responded with:
Why it can be expensive

- Hardware costs: Training deep learning models often requires GPUs or specialized chips, which can be costly to buy or rent.
- Large datasets: Deep learning typically needs a lot of data, which can be expensive to collect, label, and store.
- Energy usage: Training large models uses a lot of electricity.
- Specialized expertise: Hiring data scientists or engineers with deep learning skills can be costly.
Why it’s not always expensive

- Pretrained models: You can use models that are already trained (like image classifiers or language models), which drastically reduces cost.
- Cloud services: Providers like Google, AWS, and Microsoft rent GPU time so you don’t need to buy hardware.
- Smaller models: Many modern deep learning models are efficient and inexpensive to run.
- Open-source tools: Frameworks like TensorFlow and PyTorch are free.
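The cloud-rental point above is easier to feel with some quick back-of-the-envelope math. The prices and hours below are made-up numbers I picked for illustration, not real cloud quotes:

```python
# Back-of-the-envelope GPU rental cost. All rates and hours below are
# hypothetical examples, not actual cloud pricing.

def training_cost(gpu_hourly_rate, num_gpus, hours):
    """Total rental cost for one training run."""
    return gpu_hourly_rate * num_gpus * hours

# A small fine-tuning job: one rented GPU at $2/hour for 8 hours.
small_job = training_cost(2.00, 1, 8)      # → 16.0 dollars

# Training from scratch: eight GPUs at $2/hour for two weeks (336 hours).
big_job = training_cost(2.00, 8, 336)      # → 5376.0 dollars
```

Same formula, wildly different totals, which matches ChatGPT's point: renting a little GPU time is cheap, but training a big model from scratch adds up fast.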