Convert and Run a Custom Neural Network

Note

In addition to the prerequisites for flashing, you also need a host with Docker installed, an Ethernet cable, and a local network with DHCP.

To convert your own AI model and run it on an evaluation kit, use the following steps:

  1. Install a pre-built image and connect to the board as described here.

  2. Connect the board to a local network using the Ethernet cable.

  3. Download 1.tflite from Kaggle and save it as mymodel.tflite.

  4. Download the Docker image of the SyNAP toolkit on the host:

    $ docker pull ghcr.io/synaptics-synap/toolkit:3.0.0
    
  5. Define an alias in the host shell to run the SyNAP Toolkit container:

    $ alias synap='docker run -i --rm -u $(id -u):$(id -g) -v ${MOUNTPATH}:${MOUNTPATH} \
                   -w $(pwd) ghcr.io/synaptics-synap/toolkit:3.0.0'
    

    where ${MOUNTPATH} is the absolute path of the host directory to mount inside the container. The command can be executed from any directory and remains valid for the current session; to keep the alias across sessions, add it to your shell startup file (e.g. .bashrc or .zshrc). A sketch of this permanent setup follows this list.

    You can get help on the available toolkit commands by running the alias without any arguments:

    $ synap
    
  6. Convert the model with the following command:

    $ synap convert --model mymodel.tflite --target ${CHIP_NAME} --out-dir converted
    

    where ${CHIP_NAME} is SL1620, SL1640, or SL1680, depending on the target device.

    This command converts mymodel.tflite to converted/model.synap, the model converted for execution on the evaluation kit. A consolidated example covering steps 6 through 9 follows this list.

  7. Find the IP address of the board with the following command on the target:

    # ifconfig eth0 | grep "inet addr"
              inet addr:192.168.1.110  Bcast:192.168.1.255  Mask:255.255.255.0
    
  8. Upload the converted model to the board by running the following command on the host:

    $ scp converted/model.synap root@192.168.1.110:/tmp
    
  9. Connect to the board and issue the following commands:

    # cd /tmp
    # synap_cli random
    Flush/invalidate: yes
    Loop period (ms): 0
    Network inputs: 1
    Network outputs: 1
    Input buffer: input size: 150528 : random
    Output buffer: output size: 1001
    
    Predict #0: 12.49 ms
    
    Inference timings (ms):  load: 30.72  init: 3.35  min: 12.49  median: 12.49  max: 12.49  stddev: 0.00  mean: 12.49
    
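If you use the toolkit regularly, you can make the alias from step 5 permanent by defining ${MOUNTPATH} and the alias in your shell startup file, as mentioned above. The lines below are a minimal sketch: the $HOME/models location is only an assumed example, so adjust the path to wherever you keep your models.

    # Example additions to ~/.bashrc (or ~/.zshrc); $HOME/models is an assumed model directory
    export MOUNTPATH="$HOME/models"
    alias synap='docker run -i --rm -u $(id -u):$(id -g) -v ${MOUNTPATH}:${MOUNTPATH} \
                 -w $(pwd) ghcr.io/synaptics-synap/toolkit:3.0.0'

Because the alias body is single-quoted, $(id -u), $(pwd) and ${MOUNTPATH} are expanded each time the alias is invoked, just as in step 5.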
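As a quick recap of steps 6 through 9, the host-side commands below convert the model for an SL1680 kit, upload it, and run it on the board. This is only a sketch: it assumes the board is reachable over SSH as root and that its address is 192.168.1.110, the example address shown in step 7; substitute your own target chip and address.

    $ synap convert --model mymodel.tflite --target SL1680 --out-dir converted
    $ scp converted/model.synap root@192.168.1.110:/tmp
    $ ssh root@192.168.1.110
    # cd /tmp
    # synap_cli random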

To learn more about model conversion options, additional model testing tools, and how to use the model in your own application, refer to Machine Learning with SyNAP.