
Use TensorFlow Lite with Occlum

This project demonstrates how Occlum enables TensorFlow Lite in SGX enclaves.

Step 1: Download TensorFlow, build TensorFlow Lite, and download the models

./download_and_build_tflite.sh

When it completes, the TensorFlow source tree can be found in the tensorflow_src directory, and the TensorFlow Lite models can be found in the models directory.
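
For reference, the script's job boils down to fetching the TensorFlow source and building TensorFlow Lite's library and example binaries. Below is a minimal sketch of the equivalent manual steps, using TensorFlow Lite's stock build helpers; the exact TensorFlow version and the Occlum-specific patching under the patch directory are omitted, so treat this as an outline rather than the script itself.

# Fetch the TensorFlow source into tensorflow_src
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
cd tensorflow_src
# Download TFLite's third-party dependencies (flatbuffers, gemmlowp, etc.)
./tensorflow/lite/tools/make/download_dependencies.sh
# Build the TFLite static library and the example binaries
./tensorflow/lite/tools/make/build_lib.sh
# (The demo script additionally downloads pretrained .tflite models into ./models)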

Step 2.1: To run the TensorFlow Lite inference demo in Occlum, run

./run_tflite_in_occlum.sh demo

Step 2.2: To run the TensorFlow Lite inference benchmark in Occlum, run

./run_tflite_in_occlum.sh benchmark
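
Both modes above follow the standard Occlum workflow: create an Occlum instance, copy the TFLite binary and model into its image, build the enclave, and run the entrypoint inside it. The script handles all of this; a rough sketch, with placeholder binary and model names, looks like the following.

occlum init                               # create a new Occlum instance in the current directory
cp <tflite-binary> image/bin/             # stage the binary into the instance's image
cp models/<model>.tflite image/           # stage the model file as well
occlum build                              # build the SGX enclave and its trusted FS image
occlum run /bin/<tflite-binary>           # execute the binary inside the enclave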

Step 3.1 (Optional): To run the TensorFlow Lite inference demo on Linux, run

./run_tflite_in_linux.sh demo

Step 3.2 (Optional): To run the TensorFlow Lite inference benchmark on Linux, run

./run_tflite_in_linux.sh benchmark
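
For comparison, the Linux runs are plain process invocations of the same binaries outside SGX. A sketch with illustrative paths and model names follows; label_image and benchmark_model are TensorFlow Lite's stock demo and benchmark tools, and the output directory shown is where the TFLite make build normally places them.

# Binaries produced by the TFLite make build (path assumes an x86_64 host)
BIN=./tensorflow_src/tensorflow/lite/tools/make/gen/linux_x86_64/bin
# Demo: classify an image with a MobileNet model (model name is illustrative)
$BIN/label_image --tflite_model ./models/mobilenet_v1_1.0_224.tflite
# Benchmark: measure average inference latency on the same model
$BIN/benchmark_model --graph ./models/mobilenet_v1_1.0_224.tflite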