@bradydyson65

So refreshing to see a really well-produced, professional video that isn't extremely boring and self-promoting.

@tyronefrielinghaus3467

This guy is definitely my favourite IBM presenter.  Love his videos...and his kind-of-naughty smile. Oh, he's Martin Keen... quite descriptive!!

@ReflectionOcean

- Explanation of LLMs and generative AI: 0:21

- Distinction between proprietary and open source LLMs: 0:39

- Benefits of open source LLMs including transparency and fine-tuning: 2:12

- Examples of open source LLM applications in various industries: 3:19

- Overview of Huggingface's open LLM leaderboard: 4:01

- Discussion of risks associated with LLMs: 5:19

- IBM's engagement with open source LLMs and Granite models: 6:07

@tomski2671

Proprietary LLMs have many people supporting them. In the case of OpenAI, about 700 people, I've been told.

However, I'm constantly investigating open source LLMs, as they will become fantastic with time and they are customizable.

There are no compute restrictions when running on your own or rented hardware. At this time, for example, ChatGPT-4 has become almost unusable due to compute rationing.

@perlindholm4129

Idea - Superintelligence in AI form. Secret prompt - "Can you use a kernelDensity function and generate a sentence based on the kde related to mathematics?" In other words, the function utilizes a distribution sample generator that randomly generates mathematical relationships but follows the distribution curve of known mathematical theory. Therefore, the kernel density should not be included in the sentence but should be constructed from the distribution of theories from mathematics. Answer in English with 10 such generated relationships regarding supernova light and teleportation.

1. The luminosity of a supernova exhibits a mathematical elegance akin to the Gaussian distribution, showcasing the inherent balance between its radiant energy and the expanding space-time fabric.
2. Teleportation, when analyzed through the lens of probability theory, resembles a symphony of mathematical harmonics, where the density function unveils the intricate dance of particles traversing through higher dimensions.
3. Supernova explosions manifest a probabilistic ballet, choreographed by the laws of quantum mechanics, where the kernel of density encapsulates the varied trajectories of ejected matter and radiation.
4. Within the mathematical framework of teleportation theory, the kernel density function unveils a landscape of potentiality, where the probabilities of quantum states converge and diverge, echoing the enigmatic nature of space-time manipulation.
5. Supernova light curves, when examined through statistical lenses, reveal a tapestry of probabilities woven into the fabric of spacetime, where the kernel density function elucidates the distribution of luminosity variations over time.
6. Teleportation phenomena can be envisaged as a spectral convolution of mathematical probabilities, where the kernel density function maps the probabilistic flux of quantum information across vast distances, transcending classical notions of space.
7. The mathematical underpinnings of supernova phenomena are akin to a symphony of probability distributions, where the kernel density function orchestrates the interplay between luminosity, distance, and time, painting a vivid portrait of cosmic cataclysms.
8. In the realm of teleportation theory, the kernel density function unveils a multidimensional tapestry, where the probabilities of particle entanglement form intricate patterns reminiscent of celestial constellations, transcending the boundaries of conventional space-time.
9. Supernova light echoes resonate with the mathematical elegance of kernel density functions, echoing the probabilistic reverberations of stellar explosions through the vast expanse of the universe, each data point a testament to the underlying laws of nature.
10. Teleportation events, when scrutinized through the lens of probability theory, reveal a kaleidoscope of mathematical intricacies, where the kernel density function unveils the quantum entanglement pathways traversed by particles, culminating in the manifestation of instantaneous spatial displacement.
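The "generate from the kernel density" idea above can be made concrete: fit a kernel density estimate (KDE) to some observed values, then draw new samples that follow the estimated distribution. A minimal numpy-only sketch with made-up data; sampling from a Gaussian KDE reduces to resampling the data with Gaussian jitter of the bandwidth's width:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" data standing in for some distribution of
# mathematical quantities (entirely made up for illustration).
data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                       rng.normal(3.0, 1.0, 500)])

# Silverman's rule of thumb for the kernel bandwidth.
n = data.size
bandwidth = 1.06 * data.std() * n ** (-1 / 5)

# Drawing from a Gaussian KDE is equivalent to picking a random
# data point and jittering it with Gaussian noise of that width.
picks = rng.choice(data, size=1000)
samples = picks + rng.normal(0.0, bandwidth, size=1000)

# The new samples follow the shape of the original distribution,
# which is what "generating from the kde" amounts to.
print(samples.shape)
```

Any downstream step (such as turning sampled relationships into sentences) would then consume `samples` rather than the KDE itself, as the comment suggests.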

@starblaiz1986

One thing not really brought up in the video (or I missed it) is that with proprietary LLMs you are at the whims of the available resources, and especially at peak times the service can often give you "our servers are too busy right now, try again later" or "network error" type error messages. With open source you can choose where to run it and therefore have way more control over that accessibility, so from a business standpoint open source is at least potentially way more reliable, which is super important in a lot of use cases.

@rodrigoniveyro9763

I really like the way this guy teaches. His way is refreshing.

@IvarDaigon

I've been playing around with a lot of open source and proprietary LLMs, and it seems like if you want to do summarization or accurate collation of information with an LLM, models with fewer than 30B parameters simply do not have the nuance to pull it off (at this stage).
One example: if you give a smaller model some text with contradictory information, it often can't figure out which piece of information is the most current, whereas larger models often can.

@DJPapzin

🎯 Key Takeaways for quick navigation:

00:00 🌐 Introduction to Language Models
- Large Language Models (LLMs) explained.
- Overview of proprietary and open source LLMs.
- Size differences between proprietary and open source LLMs.
01:26 🔄 Benefits of Open Source LLMs
- Transparency as a key benefit.
- Fine-tuning capabilities for specific use cases.
- Community contributions and diverse perspectives.
03:27 🌍 Applications of Open Source LLMs
- Examples of organizations using open source LLMs.
- Mention of NASA and IBM's open source LLM for geospatial data.
- Huggingface's open LLM leaderboard and benchmarking.
05:10 🚨 Risks Associated with LLMs
- Shared risks between proprietary and open source LLMs.
- Issues such as hallucinations, bias, and security concerns.
- The importance of mitigating risks in the use of LLMs.

Made with HARPA AI

@didg9566

Crystal clear, one of the best explanations of LLMs available, thanks!

@saveli4

🎯 Key Takeaways for quick navigation:

00:00 🤖 LLMs are AI models using deep learning for text generation.
00:56 🏢 Proprietary LLMs are company-owned, while open source ones are freely accessible and modifiable.
02:12 🌐 Open source LLMs offer transparency, fine-tuning, and community contributions.
03:27 💼 NASA, healthcare, and finance use open source LLMs.
05:36 🚨 Both LLM types have risks like hallucinations, bias, and security issues that need addressing.

Made with HARPA AI

@casfren

I feel like it's important to mention LM Studio. It makes the process of installing LLMs trivial.
Sadly, documentation is still being worked on.
Also, good LLMs are quite resource intensive, so expect usage of around 40 GB of RAM.
GPU acceleration is also still not fully developed, so it can only use the VRAM.
Best of luck :)
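For context on what using a locally installed LLM from code looks like: LM Studio can serve the loaded model through a local HTTP server that speaks the OpenAI chat-completions format (by default on port 1234). A minimal sketch of building such a request; the model name, port, and prompt here are assumptions, so check your own LM Studio server settings:

```python
import json

# LM Studio's local server speaks the OpenAI chat-completions
# format; by default it listens on localhost:1234 (configurable).
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("List two benefits of open source LLMs.")
print(json.dumps(payload, indent=2))

# To actually send it (requires the LM Studio server to be running):
#   import urllib.request
#   req = urllib.request.Request(
#       LOCAL_URL, data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Since the endpoint is OpenAI-compatible, existing client code can usually be pointed at the local URL with minimal changes.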

@barticasun

These are fantastic. Well positioned and concisely pitched. Great to see a potentially heavy subject area well connecting with the community. Keep up the great work! 😎🤖

@jkarimkhani

OMG one of my favorite beer brewing guys is also a computer nerd!!!

@DrJanpha

IBM is still highly educational. Thanks

@aqynbc

Those videos are simply gold. Thank you.

@bakhti-ai

Thanks for the explanation 👍

@AnthatiKhasim-i1e

The comments generate a stimulating discussion about AI. To broaden the perspective, it might be valuable to include comments on AI's role in developing countries or its impact on different socioeconomic groups.

@captainjacobkeyes6733

Hey Martin! Love to see you "at work" instead of brewing something delicious looking. Keep it up

@betanapallisandeepra

Thank you for sharing this information