Firefly Open Source Community


[Technical Discussion] AIBOX-1684X: error when using quick deployment to run Llama3


Posted on 2024-6-4 17:04:20 · Views: 1223 | Replies: 0
Problem description and steps to reproduce:
Running the script produces the following error:

root@aibox-1684x:/home/bingo# ./talk_to_llama3.sh
Transformers library is already installed.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Load ./llama3/Llama3/token_config/ ...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Device [ 0 ] loading ....
[BMRT][bmcpu_setup:406] INFO:cpu_lib 'libcpuop.so' is loaded.
bmcpu init: skip cpu_user_defined
open usercpu.so, init user_cpu_init
Model[./llama3/bmodels/llama3-8b_int4_1dev_256.bmodel] loading ....
[BMRT][load_bmodel:1084] INFO:Loading bmodel from [./llama3/bmodels/llama3-8b_int4_1dev_256.bmodel]. Thanks for your patience...
[BMRT][load_bmodel:1023] INFO:pre net num: 0, load net num: 69
Done!

      _____ _           __ _              _    ___
     |  ___(_)_ __ ___ / _| |_   _       / \  |_ _|
     | |_  | | '__/ _ \ |_| | | | |     / _ \  | |
     |  _| | | | |  __/  _| | |_| |    / ___ \ | |
     |_|   |_|_|  \___|_| |_|\__, |   /_/   \_\___|
                             |___/


User: who are you

Llama3: [bmlib_memory][error] bm_device_mem_range_valid saddr=0x19e8b60 eaddr=0x7512f4c7 out of range
./talk_to_llama3.sh: line 13:  1475 Segmentation fault      python3 ./llama3/Llama3/python_demo/pipeline.py -m ./llama3/bmodels/llama3-8b_int4_1dev_256.bmodel -t ./llama3/Llama3/token_config/ -d 0
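
A minimal troubleshooting sketch, assuming only the paths shown in the log above: rerun the failing step directly (it is the exact command the wrapper script launches at its line 13) and check the device memory state with the Sophon bm-smi monitoring tool before loading the model.

    # Reproduce outside the wrapper script; this is the same command that hits
    # the segmentation fault above.
    python3 ./llama3/Llama3/python_demo/pipeline.py \
        -m ./llama3/bmodels/llama3-8b_int4_1dev_256.bmodel \
        -t ./llama3/Llama3/token_config/ \
        -d 0

    # Check how much device (NPU) memory is allocated and in use; the
    # "bm_device_mem_range_valid ... out of range" error indicates an access
    # outside the memory region reserved for the TPU.
    bm-smi

If too little memory is reserved for the NPU, the SoC memory layout may need to be adjusted before an 8B int4 bmodel can be loaded; how to do that depends on the firmware/SDK version, so treat the above only as a starting point for isolating the fault.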

Attachment: log.txt (1.44 KB)
