Let's see if it can help me finish my novel
@4:12 GPT-4o got the 5070 Ti VRAM wrong. It should be 16 GB
You should have used DeepSeek V3 for this test, because for the other models you aren't using a reasoning model either
FYI, when you do search with DeepSeek, try it with both V3 and R1; it's hit or miss and it might work. Hope you can run it against o3-mini and a few others you haven't tried, like a bunch of models from AI Studio. Also, maybe you can try forcing the search in the second follow-up prompt by saying something like "show your sources". Just an idea
FYI, OpenAI is not really relying on Bing but competing with it. It seems OpenAI uses an internal crawler and indexing database, which is why its results get outdated pretty fast. But they probably don't care much about that.
a) Clearly 4o didn't auto-enable the web search functionality, so why didn't you force it on with the web search icon? b) Why aren't you starting a new prompt in DeepSeek for each question? Not only are you not comparing the same type of behavior, but R1 is specifically bad at answering subsequent prompts, as DeepSeek themselves mention in their paper.
Please make a version with o3-mini, now that it has search too 🙏
Did they all fail to include the 40-series refresh of the RTX cards? That would have been nice to point out in the comparison.
DeepSeek is so close to OpenAI now that it makes sense to use it as a daily driver. I find it to be faster than o1
Why didn't you use Gemini 1.5 Pro with Deep Research? It would beat them all
R1 has web search capabilities now
I think Gemini took "current generation of RTX GPUs" to mean GPUs that are already released.
Compare with 3.5 Sonnet!
Just letting you know that YouTube's auto title translation is terrible for Portuguese
So, in sum, which one is the best, at least in your opinion?
No Perplexity?