Set up Intel OpenVINO and AWS Greengrass on Ubuntu
  1. First, set up the conversion tool, Model Optimizer: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
  2. Command : `source /opt/intel/computer_vision_sdk/bin/setupvars.sh`
  3. Command : `cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites`
  4. Command : `sudo -E ./install_prerequisites.sh`
  5. Model Optimizer uses Python 3.5, whereas the Greengrass samples use Python 2.7. So that Model Optimizer does not affect the global Python configuration, activate a virtual environment as below:
  6. Command : `sudo ./install_prerequisites.sh venv`
  7. Command : `cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer`
  8. Command : `source venv/bin/activate`
  9. For the best performance, models should be converted with data type FP32 for CPU, and with data type FP16 for GPU/FPGA.
  10. For classification using the BVLC AlexNet model:
    Command : `python mo.py --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel --input_proto <model_location>/deploy.prototxt --data_type <data_type> --output_dir <output_dir> --input_shape [1,3,227,227]`
  11. For object detection using the SqueezeNetSSD-5Class model:
    Command : `python mo.py --framework caffe --input_model <model_location>/SqueezeNetSSD-5Class.caffemodel --input_proto <model_location>/SqueezeNetSSD-5Class.prototxt --data_type <data_type> --output_dir <output_dir>`
  12. where `<model_location>` is the location where the user downloaded the models, `<data_type>` is FP32 or FP16 depending on the target device, and `<output_dir>` is the directory where the user wants to store the IR. The IR consists of an .xml file describing the network structure and a .bin file containing the weights. This .xml should be passed to PARAM_MODEL_XML, mentioned in the Configuring the Lambda Function section. In the BVLC AlexNet model, the prototxt defines the input shape with batch size 10 by default. To use any other batch size, the entire input shape must be provided as an argument to Model Optimizer. For example, for batch size 1, provide `--input_shape [1,3,227,227]`. (A concrete end-to-end conversion example follows this list.)
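Putting steps 2-10 together, here is the whole conversion flow in one script. This is a minimal sketch: the install root /opt/intel/computer_vision_sdk is taken from the paths used later in this guide, while `~/models` (the downloaded Caffe files) and `~/ir` (the IR output directory) are assumed locations.

```bash
# Initialize the OpenVINO environment (install root as used elsewhere in this guide).
source /opt/intel/computer_vision_sdk/bin/setupvars.sh

# Enter Model Optimizer and activate the virtual environment created in step 6.
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
source venv/bin/activate

# Convert BVLC AlexNet to an FP32 IR for CPU inference, with batch size 1.
# ~/models and ~/ir are assumed download and output locations.
python mo.py --framework caffe \
    --input_model ~/models/bvlc_alexnet.caffemodel \
    --input_proto ~/models/deploy.prototxt \
    --data_type FP32 \
    --output_dir ~/ir \
    --input_shape "[1,3,227,227]"

deactivate
```

Model Optimizer names the IR after the input model by default, so this run would produce `~/ir/bvlc_alexnet.xml` plus the matching `.bin`.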
The Greengrass samples are in:
/opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/
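Each sample is deployed as a Greengrass Lambda function packaged together with the AWS Greengrass Core SDK for Python. A minimal packaging sketch; the sample file name `greengrass_classification_sample.py` and the `~/lambda` staging directory with an unpacked `greengrasssdk/` folder inside it are assumptions:

```bash
# Stage one sample next to the Greengrass Core SDK for Python
# (~/lambda and the greengrasssdk/ folder inside it are assumptions).
cp /opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/greengrass_classification_sample.py ~/lambda/
cd ~/lambda

# Zip the handler and the SDK; upload this archive when creating the Lambda.
zip -r greengrass_sample.zip greengrass_classification_sample.py greengrasssdk/
```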

However, some of these paths changed in the openvino_toolkit_p_2018.3.343 release, so for Python 2 the Lambda environment variables need to be set as follows:

LD_LIBRARY_PATH : 
/opt/intel/computer_vision_sdk/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk/openvx/lib

PYTHONPATH : 
/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16/

PARAM_CPU_EXTENSION_PATH : 
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so
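These values are entered as environment variables in the Greengrass group's Lambda configuration. For a quick local sanity check you can also export them in a shell before running a sample by hand; a sketch, where PARAM_MODEL_XML pointing at `~/ir/bvlc_alexnet.xml` is an assumption carried over from the conversion sketch above:

```bash
# Values copied from the settings above.
export LD_LIBRARY_PATH="/opt/intel/computer_vision_sdk/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk/openvx/lib"
export PYTHONPATH="/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16/"
export PARAM_CPU_EXTENSION_PATH="/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so"

# Assumption: the IR produced by the conversion sketch earlier in this guide.
export PARAM_MODEL_XML="$HOME/ir/bvlc_alexnet.xml"
```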