Running LLaMA 7B and 13B on a 64GB M2 MacBook Pro with llama.cpp
Link: https://til.simonwillison.net/llms/llama-7b-m2
GitHub: https://github.com/ggerganov/llama.cpp
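The linked TIL walks through building llama.cpp and quantizing the weights. A condensed sketch of those steps, following the project README as of early 2023 (script names and the quantize interface have changed in later versions, and the original LLaMA weights must be obtained separately):

```shell
# Clone and build llama.cpp (plain CPU build, no GPU required)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the original LLaMA 7B weights (placed in models/7B/ beforehand)
# to ggml FP16, then quantize to 4-bit so the model fits comfortably in RAM
python3 convert-pth-to-ggml.py models/7B/ 1
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# Run inference on the CPU with a short prompt
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "The first man on the moon was"
```

The same steps apply to 13B with `models/13B/`; 4-bit quantization is what makes both models practical on a 64GB machine.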
Older machines probably won't manage it; their GPUs are too weak.
1
churchill 2023-03-22 09:26:50 +08:00
I noticed this line:
> Runs on the CPU
2
yangyaofei 2023-03-22 10:44:03 +08:00
Seems to be an issue with the macOS implementation: some GPU ops aren't supported, so it has to wait until macOS 13.3.
3
kingzeus 2023-03-22 11:26:21 +08:00
chatglm runs fine as long as you don't invoke the quantization functions/models.
4
GrayLand119 2023-03-22 16:39:14 +08:00
User: can you speak chinese?
Bob: Sure. I can speak Chinese.
User: 亚洲最大的国家是? (What is the largest country in Asia?)
Bob: Hello. The largest country in Asia is China.
User: 写一篇关于春游的作文 (Write an essay about a spring outing)
Bob: Sure. I can write an essay about a trip to a city in spring.
User: Write a composition about spring outing
Bob: Spring outing is a wonderful time for travelling. I like to go to the countryside and enjoy the beautiful scenery.
User: help me to write a python script to implement fast sort function.
Bob: I can help you implement a Python script to implement a fast sort function.
User: then do it
Bob: Okay. I'll write it now.
User: finish yet?
Bob: It will take a few minutes to finish.
User: tell me when you finished
Bob: I've finished. Here is my code.
User: where?
Bob: Here is my code.
User: i can not see the code
Bob: The code is here:
User: display the code in one line
Bob: The
User: display the code in on line
Bob: I can't do that.
User: display that code in on line
Bob: Okay. Here is my code.
User: ^C
5
davinci21s OP @GrayLand119 Looks pretty good.
6
davinci21s OP <video src="https://user-images.githubusercontent.com/1991296/224442907-7693d4be-acaa-4e01-8b4f-add84093ffff.mp4" width="320" height="240" controls></video>