RK3399Pro: error when installing rknn_toolkit-1.7.1
Platform: Ubuntu 20.04, 2 GB RAM, 1 GB NPU memory. Environment:
# firefly @ firefly in ~ venv
$ pip list
Package Version
---------------------------- --------
absl-py 1.4.0
astunparse 1.6.3
cachetools 5.3.0
certifi 2023.5.7
charset-normalizer 3.1.0
flatbuffers 23.3.3
gast 0.4.0
google-auth 2.17.3
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
graphviz 0.8.4
grpcio 1.54.0
h5py 3.8.0
idna 3.4
importlib-metadata 6.6.0
keras 2.11.0
libclang 16.0.0
Markdown 3.4.3
MarkupSafe 2.1.2
mxnet 1.9.0
numpy 1.21.6
oauthlib 3.2.2
opencv-python                4.5.3.56
opt-einsum 3.3.0
packaging 23.1
Pillow 9.5.0
pip 23.1.2
pkg_resources 0.0.0
protobuf 3.19.6
psutil 5.6.2
pyasn1 0.5.0
pyasn1-modules 0.3.0
requests 2.30.0
requests-oauthlib 1.3.1
rknn-toolkit-lite 1.7.1
rsa 4.9
ruamel.yaml 0.15.81
setuptools 67.7.2
six 1.16.0
tensorboard 2.11.2
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.11.0
tensorflow-cpu-aws 2.11.0
tensorflow-estimator 2.11.0
tensorflow-io-gcs-filesystem 0.29.0
termcolor 2.3.0
torch 1.12.0
torchvision 0.12.0
typing_extensions 4.5.0
urllib3 2.0.2
Werkzeug 2.2.3
wheel 0.40.0
wrapt 1.15.0
zipp 3.15.0
Problem: Following the documents Rockchip_User_Guide_RKNN_Toolkit_V1.7.1_CN and Rockchip_Quick_Start_RKNN_Toolkit_V1.7.1_CN, I installed the RKNN dependencies in a Python 3.7 virtualenv and then ran:
$ pip3 install --default-timeout=100 rknn_toolkit-1.7.1-cp37-cp37m-linux_aarch64.whl
The install fails with the error below; in short, opencv-python fails to compile. (I had previously installed opencv-python 4.7.)
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make: *** Error 1
make: *** Error 2
make: *** Error 2
Traceback (most recent call last):
File "/tmp/pip-build-env-m2qzw8o0/overlay/lib/python3.7/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
cmkr.make(make_args, install_target=cmake_install_target, env=env)
File "/tmp/pip-build-env-m2qzw8o0/overlay/lib/python3.7/site-packages/skbuild/cmaker.py", line 696, in make
self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
File "/tmp/pip-build-env-m2qzw8o0/overlay/lib/python3.7/site-packages/skbuild/cmaker.py", line 741, in make_impl
raise SKBuildError(msg)
An error occurred while building with CMake.
Command:
/tmp/pip-build-env-m2qzw8o0/overlay/lib/python3.7/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
Install target:
install
Source directory:
/tmp/pip-install-cyfstk69/opencv-python_017be744da0a466ba9f9c8d52a87d6d4
Working directory:
/tmp/pip-install-cyfstk69/opencv-python_017be744da0a466ba9f9c8d52a87d6d4/_skbuild/linux-aarch64-3.7/cmake-build
Please check the install target is valid and see CMake's output for more information.
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for opencv-python
Failed to build opencv-python
ERROR: Could not build wheels for opencv-python, which is required to install pyproject.toml-based projects
After searching for an answer with ChatGPT I reinstalled opencv-python 4.5.3.56, but the problem is still not solved!!! The install also takes extremely long, on both wired and wireless networks.
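For what it is worth, "c++: fatal error: Killed signal terminated program cc1plus" means the compiler process was killed, almost always by the kernel OOM killer; compiling opencv-python from source easily exhausts 2 GB of RAM. A workaround sketch before retrying pip (the 2 GB swap size and the MAKEFLAGS job limit are assumptions to tune, not verified on this exact image):
$ sudo fallocate -l 2G /swapfile && sudo chmod 600 /swapfile
$ sudo mkswap /swapfile && sudo swapon /swapfile     # temporary swap for the build
$ MAKEFLAGS="-j1" pip3 install --default-timeout=100 opencv-python==4.5.3.56
The long wait is the same issue: pip is compiling OpenCV on the board because no prebuilt aarch64/cp37 wheel of that version exists on PyPI. A distro package such as python3-opencv, made visible inside the virtualenv, avoids the source build entirely.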
https://dev.t-firefly.com/thread-120676-1-1.html
Have a look at this thread: the lite version is much easier to install, and I have never tried an OpenCV version this new either.
Hi! My RK3399Pro board already has rknn_toolkit_lite 1.7.1 installed, and my Ubuntu 20.04 x86 host has rknn-toolkit 1.7.3. I can convert TF models into RKNN models there and run inference on the board with rknn_toolkit_lite.
But I want to test a Python demo: the author's code from the guide "Rockchip TB-RK3399Pro: YOLOv3 development and optimization" requires the rknn-toolkit environment on the RK3399Pro board itself (see "Rockchip TB-RK3399Pro: board environment setup") before it can call from rknn.api import RKNN, so I planned to set up the rknn_toolkit environment on the board to test it.
I have uploaded the code to a cloud drive as yolov3_demo [extraction code: 6ga6]; feel free to download and inspect it.
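For context, the x86-side conversion mentioned above follows the usual rknn-toolkit 1.x flow. A minimal sketch; yolov3.pb, the tensor names, the preprocessing values and dataset.txt below are placeholders for whatever the actual model uses:
from rknn.api import RKNN

rknn = RKNN()
# per-channel mean/scale and channel order; adjust to the model's preprocessing
rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')
# load a frozen TensorFlow graph (input/output tensor names depend on the model)
rknn.load_tensorflow(tf_pb='./yolov3.pb',
                     inputs=['input_data'],
                     outputs=['conv_sbbox', 'conv_mbbox', 'conv_lbbox'],
                     input_size_list=[[416, 416, 3]])
# quantize against a list of calibration images, then export the .rknn file
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('./yolov3_416x416.rknn')
rknn.release()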
895816513 posted on 2023-5-10 15:07
https://dev.t-firefly.com/thread-120676-1-1.html
Have a look at this thread: the lite version is much easier to install, and I have never tried such a ...
Hi! Could you please look at my latest reply when you have time? Thanks.
The toolkit is hard to install on the board; I tried several times and never got it to work. That CSDN blog also does not say which demo to run.
895816513 posted on 2023-5-11 10:50
The toolkit is hard to install on the board; I tried several times and never got it to work. That CSDN blog also does not say which demo to run.
Hi, one more question: if rknn_toolkit is not installed on the board, how do I call the API from rknn.api import RKNN?
I plan to downgrade opencv-python to 4.0.1.23 or 4.3.0.36 and try again.
Looking at the CSDN blogger's demo, running python3 rknn_camera_416x416.py really does require rknn_toolkit; without it I cannot import that interface. My original goal was to attach a camera and do object detection. The code:
import numpy as np
import cv2
from PIL import Image
from rknn.api import RKNN
from timeit import default_timer as timer

GRID0 = 13
GRID1 = 26
GRID2 = 52
LISTSIZE = 85
SPAN = 3
NUM_CLS = 80
MAX_BOXES = 500
OBJ_THRESH = 0.5
NMS_THRESH = 0.6
CLASSES = ("person", "bicycle", "car","motorbike ","aeroplane ","bus ","train","truck ","boat","traffic light",
"fire hydrant","stop sign ","parking meter","bench","bird","cat","dog ","horse ","sheep","cow","elephant",
"bear","zebra ","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite",
"baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife ",
"spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza ","donut","cake","chair","sofa",
"pottedplant","bed","diningtable","toilet ","tvmonitor","laptop ","mouse ","remote ","keyboard ","cell phone","microwave ",
"oven ","toaster","sink","refrigerator ","book","clock","vase","scissors ","teddy bear ","hair drier", "toothbrush ")
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    # keep only cells whose raw objectness exceeds the threshold (pre-sigmoid)
    box_confidence = input[..., 4]
    obj_thresh = -np.log(1 / OBJ_THRESH - 1)
    pos = np.where(box_confidence > obj_thresh)
    input = input[pos]
    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2])
    box_wh = np.exp(input[..., 2:4])
    # pos[2] is the anchor index of each surviving cell
    for idx, val in enumerate(pos[2]):
        box_wh[idx] = box_wh[idx] * anchors[pos[2][idx]]
    pos0 = np.array(pos[0])[:, np.newaxis]
    pos1 = np.array(pos[1])[:, np.newaxis]
    grid = np.concatenate((pos1, pos0), axis=1)
    box_xy += grid
    box_xy /= (grid_w, grid_h)
    box_wh /= (416, 416)
    box_xy -= (box_wh / 2.)
    box = np.concatenate((box_xy, box_wh), axis=-1)

    return box, box_confidence, box_class_probs
def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with object threshold.

    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.

    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    box_scores = box_confidences * box_class_probs
    box_classes = np.argmax(box_scores, axis=-1)
    box_class_scores = np.max(box_scores, axis=-1)
    pos = np.where(box_class_scores >= OBJ_THRESH)

    boxes = boxes[pos]
    classes = box_classes[pos]
    scores = box_class_scores[pos]

    return boxes, classes, scores
def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.

    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.

    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2]
    h = boxes[:, 3]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep
def yolov3_post_process(input_data):
    # yolov3 masks and anchors for 416x416 COCO; the bracketed values were
    # lost in the forum paste and are restored with the standard yolov3 set
    masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    # per-class NMS
    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores
def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.

    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
    """
    for box, score, cl in zip(boxes, scores, classes):
        x, y, w, h = box
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(x, y, x + w, y + h))
        # box coordinates are normalized; scale back to image pixels
        x *= image.shape[1]
        y *= image.shape[0]
        w *= image.shape[1]
        h *= image.shape[0]
        top = max(0, np.floor(x + 0.5).astype(int))
        left = max(0, np.floor(y + 0.5).astype(int))
        right = min(image.shape[1], np.floor(x + w + 0.5).astype(int))
        bottom = min(image.shape[0], np.floor(y + h + 0.5).astype(int))

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)
def load_model():
    rknn = RKNN()
    print('-->loading model')
    # rknn.load_rknn('./yolov3_tiny.rknn')
    rknn.load_rknn('./yolov3_416x416.rknn')
    print('loading model done')

    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')
    return rknn
if __name__ == '__main__':
    rknn = load_model()

    font = cv2.FONT_HERSHEY_SIMPLEX
    # capture = cv2.VideoCapture("data/3.mp4")
    capture = cv2.VideoCapture(0)
    accum_time = 0
    curr_fps = 0
    prev_time = timer()
    fps = "FPS: ??"
    try:
        while True:
            ret, frame = capture.read()
            if ret == True:
                image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                image = cv2.resize(image, (416, 416))

                testtime = timer()
                out_boxes, out_boxes2, out_boxes3 = rknn.inference(inputs=[image])
                testtime2 = timer()
                print("rknn use time {}", testtime2 - testtime)

                out_boxes = out_boxes.reshape(SPAN, LISTSIZE, GRID0, GRID0)
                out_boxes2 = out_boxes2.reshape(SPAN, LISTSIZE, GRID1, GRID1)
                out_boxes3 = out_boxes3.reshape(SPAN, LISTSIZE, GRID2, GRID2)
                input_data = []
                input_data.append(np.transpose(out_boxes, (2, 3, 0, 1)))
                input_data.append(np.transpose(out_boxes2, (2, 3, 0, 1)))
                input_data.append(np.transpose(out_boxes3, (2, 3, 0, 1)))

                testtime = timer()
                boxes, classes, scores = yolov3_post_process(input_data)
                testtime2 = timer()
                print("process use time: {}", testtime2 - testtime)

                testtime = timer()
                if boxes is not None:
                    draw(frame, boxes, scores, classes)

                curr_time = timer()
                exec_time = curr_time - prev_time
                prev_time = curr_time
                accum_time += exec_time
                curr_fps += 1
                if accum_time > 1:
                    accum_time -= 1
                    fps = "FPS: " + str(curr_fps)
                    curr_fps = 0
                cv2.putText(frame, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                            fontScale=0.50, color=(255, 0, 0), thickness=2)
                cv2.imshow("results", frame)

                c = cv2.waitKey(5) & 0xff
                if c == 27:
                    cv2.destroyAllWindows()
                    capture.release()
                    rknn.release()
                    break

                testtime2 = timer()
                print("show image use time: {}", testtime2 - testtime)
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        capture.release()
        rknn.release()
Also, the quick start document and the online guides recommend installing the toolkit on x86; strangely, though, Rockchip officially also provides a toolkit package for the arm board, and that blogger actually installed it successfully!!!
KevinWu posted on 2023-5-11 11:09
Hi, one more question: if rknn_toolkit is not installed on the board, how do I call the API from rknn.api import RKNN?
I plan ...
We did install it successfully in the past, although the process was a real hassle. But as the Python libraries kept updating we have not managed a successful install for a long time, so nowadays we recommend installing the lite version on arm.
OK, thanks! I will consult the following two posts and try again:
Installing the RKNN Toolkit 1.4.0 development environment on the Firefly AIO-3399ProC board
RK3399Pro environment setup, and RKNN model deployment and use with Yolov5 C++ and OpenCV
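For what it is worth, in this particular demo the lite API is almost a drop-in replacement for the full toolkit: the script only uses load_rknn, init_runtime, inference and release, and rknnlite provides all four. A minimal sketch of the swap (assuming the converted yolov3_416x416.rknn is already copied to the board):
from rknnlite.api import RKNNLite    # instead of: from rknn.api import RKNN

rknn = RKNNLite()                    # instead of: rknn = RKNN()
rknn.load_rknn('./yolov3_416x416.rknn')
ret = rknn.init_runtime()            # no target needed on the RK3399Pro itself
if ret != 0:
    exit(ret)
# ... identical pre-/post-processing and inference loop as above ...
# outputs = rknn.inference(inputs=[image])
rknn.release()
Model conversion (rknn.config / load_tensorflow / build / export_rknn) still has to happen on the x86 host; only the inference side moves to the lite API.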
895816513 posted on 2023-5-11 17:04
We did install it successfully in the past, although the process was a real hassle. But as the Python libraries kept updating we have not managed a successful install for a long time, so nowadays we recommend ar ...
Hi, urgent! I just called the toolkit_lite interface to test a demo and the runtime fails to initialize. How do I solve this?
The error is as follows:
$ python3 rknn_picture_416x416.py
-->loading model
loading model done
--> Init runtime environment
E Only support ntb mode on Linux_x64 aarch64. But can not find device with ntb mode.
E Catch exception when init runtime!
E Traceback (most recent call last):
File "/home/firefly/venv/lib/python3.7/site-packages/rknnlite/api/rknn_lite.py", line 145, in init_runtime
async_mode=async_mode, rknn2precompile=rknn2precompile)
File "rknnlite/api/rknn_runtime.py", line 201, in rknnlite.api.rknn_runtime.RKNNRuntime.__init__
File "rknnlite/api/rknn_runtime.py", line 637, in rknnlite.api.rknn_runtime.RKNNRuntime._connect
Exception: Init runtime environment failed!
The code is as follows:
import platform
import cv2
import numpy as np
from rknnlite.api import RKNNLite
INPUT_SIZE = 224
def show_top5(result):
    output = result[0].reshape(-1)
    # softmax
    output = np.exp(output) / sum(np.exp(output))
    output_sorted = sorted(output, reverse=True)
    top5_str = 'resnet18\n-----TOP 5-----\n'
    for i in range(5):
        value = output_sorted[i]
        index = np.where(output == value)
        for j in range(len(index)):
            if (i + j) >= 5:
                break
            if value > 0:
                topi = '{}: {}\n'.format(index[j], value)
            else:
                topi = '-1: 0.0\n'
            top5_str += topi
    print(top5_str)
if __name__ == '__main__':
    rknn_lite = RKNNLite()

    # load RKNN model
    print('--> Load RKNN model')
    ret = rknn_lite.load_rknn('./resnet_18.rknn')
    if ret != 0:
        print('Load RKNN model failed')
        exit(ret)
    print('done')

    ori_img = cv2.imread('./space_shuttle_224.jpg')
    img = cv2.cvtColor(ori_img, cv2.COLOR_BGR2RGB)

    # init runtime environment
    print('--> Init runtime environment')
    # run on RK3399Pro/RK1808 with Debian OS, do not need specify target.
    if platform.machine() == 'aarch64':
        target = None
    else:
        target = 'rk1808'
    ret = rknn_lite.init_runtime(target=target)
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    outputs = rknn_lite.inference(inputs=[img])
    show_top5(outputs)
    print('done')

    rknn_lite.release()
It was still working normally before this.
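A commonly reported cause of "can not find device with ntb mode" on RK3399Pro is a stray npu_transfer_proxy process: the proxy is meant for driving the NPU from an external host, and while it runs it occupies the device, so on-board rknnlite cannot attach in ntb mode. Worth checking first (a sketch; any service wrapping the proxy varies by image):
$ ps -ef | grep npu_transfer_proxy    # ideally shows nothing but the grep itself
$ sudo pkill npu_transfer_proxy       # stop it if present
$ python3 rknn_picture_416x416.py     # then retry the demo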
895816513 posted on 2023-5-11 17:04
We did install it successfully in the past, although the process was a real hassle. But as the Python libraries kept updating we have not managed a successful install for a long time, so nowadays we recommend ar ...
Hi! Could you take a look at this? Yesterday I re-flashed the firmware (AIO-RK3399PRO-JD4-UBUNTU-20.04_DESKTOP-GPT-20211230-1511.img) and installed rknn_toolkit_lite-1.7.1-cp37-cp37m-linux_aarch64.whl in a Python 3.7 virtualenv following the manual, but for some reason ntb mode cannot be started.
Following an online guide for a host PC failing to connect to the 1.7.1 driver on a compute stick, I tried replacing npu_transfer_proxy under ~/venv/lib/python3.7/site-packages/rknnlite/3rdparty/platform-tools/ntp/linux-aarch64 with the latest one (downloaded from the RKNPU For RK3399Pro repo on GitHub), and started it from that path with sudo ./npu_transfer_proxy,
but npu_transfer_proxy devices prints no output.
The environment is as follows:
Package Version
----------------- --------------
numpy 1.16.3
opencv-python 4.7.0.72
pip 23.1.2
pkg_resources 0.0.0
psutil 5.6.2
rknn-toolkit-lite 1.7.1
ruamel.yaml 0.15.81
setuptools 67.7.2
wheel 0.40.0
(venv) firefly@firefly:~/wzf_ws/rknn-toolkit-lite/packages$ ffgo version
OS: Ubuntu 20.04.3 LTS
MODEL: RK3399pro-firefly-aiojd4 board
FIREFLY: v2.10-62-g087b2b2
DATE: 20211228-1443
KERNEL: Linux version 4.4.194 (jincheng@jincheng-PC) (gcc version 6.3.1 20170404 (Linaro GCC 6.3-2017.05) ) #15 SMP Thu Dec 30 14:52:16 CST 2021