Running a Local AI Copilot in Visual Studio Code with the CodeGPT Plugin and Llama 3

If you've been using GitHub Copilot, you might have noticed that you're sending all your data to Microsoft. Wouldn't it be nice if you could run a model directly on your system, using only data from your own machine? With Visual Studio Code and the CodeGPT plugin, you can do just that.

First, make sure Ollama is installed on your machine. You can download it from ollama.com and run the installer. Once it's installed, open a terminal window and run 'ollama pull llama3:8b' to download Meta's Llama 3 8B model.
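In practice, this step is just a couple of terminal commands. A minimal sketch, assuming a Linux machine (on macOS or Windows, use the installer from ollama.com instead of the install script):

```sh
# Install Ollama (Linux install script; macOS/Windows users can use the
# graphical installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download Meta's Llama 3 8B model
ollama pull llama3:8b
```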

You'll also want the instruct-tuned variant of the model, which we'll pull in a moment. Next, make sure your Visual Studio Code is up to date. CodeGPT is a chat AI extension that lets you select your own models and run them locally, much like GitHub Copilot.

You can find CodeGPT by clicking 'Extensions' in the left panel and typing 'CodeGPT'. Download and install the most popular one, with over 1.1 million downloads. After it's installed, click the gear icon, go to 'Extension Settings', and select Ollama as the provider under CodeGPT. Remember to enable the CodeGPT Copilot feature as well. Then run 'ollama pull llama3:instruct' to download the instruct-tuned model.
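If you want to double-check the downloads, Ollama can list every model it has stored locally:

```sh
# Pull the instruct-tuned Llama 3 model, then confirm both models are present
ollama pull llama3:instruct
ollama list
```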

There's one last configuration step. Open the CodeGPT chat window and click 'Providers' at the top. Choose Ollama from the list and pick 'llama3:8b' as the model you'll be running.
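CodeGPT talks to Ollama's local HTTP API, which listens on port 11434 by default. If the model doesn't appear in the list, a quick sanity check is to confirm the server is up:

```sh
# Ollama's local API listens on port 11434 by default;
# this should print "Ollama is running"
curl http://localhost:11434
```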

And that's it! Your local Copilot is now set up. Try asking it for anything, like a basic hello world endpoint for your Next.js app.
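For instance, asking for a hello world endpoint might produce something like this minimal sketch (the App Router file path app/api/hello/route.ts is illustrative, not from the original walkthrough):

```ts
// app/api/hello/route.ts — a minimal Next.js App Router route handler.
// GET /api/hello responds with a JSON greeting.
export async function GET() {
  return Response.json({ message: "Hello, world!" });
}
```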
