Add pytorch demo

This commit is contained in:
ClawSeven 2021-06-08 13:01:53 +08:00 committed by Zongmin.Gu
parent 53658e865b
commit f534017d79
6 changed files with 157 additions and 0 deletions

@@ -308,6 +308,30 @@ jobs:
    - name: Run Tensorflow-lite benchmark
      run: docker exec tflite_test bash -c "cd /root/occlum/demos/tensorflow_lite && SGX_MODE=SIM ./run_tflite_in_occlum.sh benchmark"

  Pytorch_test:
    runs-on: ubuntu-18.04
    steps:
    - uses: actions/checkout@v1
      with:
        submodules: true
    - name: Get occlum version
      run: echo "OCCLUM_VERSION=$(grep "Version =" src/pal/include/occlum_version.h | awk '{print $4}')" >> $GITHUB_ENV
    - name: Create container
      run: docker run -itd --name=pytorch_test -v $GITHUB_WORKSPACE:/root/occlum occlum/occlum:${{ env.OCCLUM_VERSION }}-ubuntu18.04
    - name: Build dependencies
      run: docker exec pytorch_test bash -c "cd /root/occlum; make submodule"
    - name: Make install
      run: docker exec pytorch_test bash -c "source /opt/intel/sgxsdk/environment; cd /root/occlum; OCCLUM_RELEASE_BUILD=1 make install"
    - name: Build python and pytorch
      run: docker exec pytorch_test bash -c "cd /root/occlum/demos/pytorch; ./install_python_with_conda.sh"
    - name: Run pytorch test
      run: docker exec pytorch_test bash -c "cd /root/occlum/demos/pytorch; SGX_MODE=SIM ./run_pytorch_on_occlum.sh"

  # Below tests need the test image to run faster
  Grpc_test:

@@ -20,6 +20,7 @@ This set of demos shows how real-world apps can be easily run inside SGX enclaves
* [https_server](https_server/): An HTTPS file server based on [Mongoose Embedded Web Server Library](https://github.com/cesanta/mongoose).
* [grpc](grpc/): A client and server communicating through [gRPC](https://grpc.io).
* [openvino](openvino/): A benchmark of [OpenVINO Inference Engine](https://docs.openvinotoolkit.org/2019_R3/_docs_IE_DG_inference_engine_intro.html).
* [pytorch](pytorch/): A demo of [PyTorch](https://pytorch.org/).
* [redis](redis/): A demo of [redis](https://redis.io).
* [sqlite](sqlite/): A demo of the [SQLite](https://www.sqlite.org) SQL database engine.
* [tensorflow_lite](tensorflow_lite/): A demo and benchmark of the [Tensorflow Lite](https://www.tensorflow.org/lite) inference engine.

demos/pytorch/README.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# Use PyTorch with Python and Occlum

This project demonstrates how Occlum enables _unmodified_ [PyTorch](https://pytorch.org/) programs to run inside SGX enclaves, on top of an _unmodified_ [Python](https://www.python.org) interpreter.

## Sample Code: Linear model

The sample uses the `nn` package to define the model as a sequence of layers. `nn.Sequential` is a Module that contains other Modules and applies them in sequence to produce its output. Each `Linear` Module computes its output from the input using a linear function, and holds internal Tensors for its weight and bias.
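The model described here is built in `demo.py`, included in this commit; condensed, it looks like this:

```python
import torch

# Two-layer network (Linear -> ReLU -> Linear), matching demo.py:
# 1000 input features, 100 hidden units, 10 outputs.
model = torch.nn.Sequential(
    torch.nn.Linear(1000, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 10),
)
```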
## How to Run

This tutorial assumes that you have Docker installed and use Occlum inside a Docker container.

Occlum is compatible with glibc-based Python, so this demo uses Miniconda to install Python. The `install_python_with_conda.sh` script downloads Miniconda and then uses conda to install Python 3.7 and the PyTorch packages that this project needs. The commands below use `occlum/occlum:0.22.0-ubuntu18.04` as the example image.
Step 1 (on the host): Start an Occlum container
```
docker pull occlum/occlum:0.22.0-ubuntu18.04
docker run -it --name=pythonDemo --device /dev/sgx/enclave occlum/occlum:0.22.0-ubuntu18.04 bash
```
Step 2 (in the Occlum container): Download Miniconda and install Python to a specified prefix.
```
cd /root/occlum/demos/pytorch
bash ./install_python_with_conda.sh
```
Step 3 (in the Occlum container): Run the sample code on Occlum
```
cd /root/occlum/demos/pytorch
bash ./run_pytorch_on_occlum.sh
```
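For a quick sanity check before the full demo, a tiny script like the one below (hypothetical, not part of this commit) could be copied into `occlum_instance/image` next to `demo.py` before `occlum build`, then run with `occlum run /bin/python3 check_torch.py`:

```python
# check_torch.py (hypothetical): verify that PyTorch imports and that
# basic tensor operations work inside the enclave.
import torch

print("PyTorch version:", torch.__version__)
print("random 2x3 tensor:", torch.randn(2, 3))
```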

demos/pytorch/demo.py (new file, 56 lines)

@@ -0,0 +1,56 @@
# Tutorial code from https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-nn.
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction="sum")
learning_rate = 1e-4
print("Training...")
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

print("Done")

demos/pytorch/install_python_with_conda.sh (new file, 11 lines)

@@ -0,0 +1,11 @@
#!/bin/bash
set -e
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# 1. Init occlum workspace
[ -d occlum_instance ] || occlum new occlum_instance
# 2. Install python and dependencies to specified position
[ -f Miniconda3-latest-Linux-x86_64.sh ] || wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
[ -d miniconda ] || bash ./Miniconda3-latest-Linux-x86_64.sh -b -p "$script_dir/miniconda"
"$script_dir/miniconda/bin/conda" create --prefix "$script_dir/occlum_instance/image/opt/python-occlum" -y python=3.7 pytorch torchvision -c pytorch
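The `conda create --prefix` call above installs a complete Python environment directly into the Occlum image directory. As a quick post-install check (a sketch, not part of this commit), the prefixed interpreter can be invoked on the host to confirm that PyTorch imports:

```python
# Hypothetical post-install check: run the interpreter that conda installed
# under the Occlum image prefix and confirm that PyTorch imports cleanly.
import subprocess

prefix = "occlum_instance/image/opt/python-occlum"
out = subprocess.run(
    [f"{prefix}/bin/python3", "-c", "import torch; print(torch.__version__)"],
    capture_output=True, text=True, check=True,
)
print("installed torch version:", out.stdout.strip())
```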

demos/pytorch/run_pytorch_on_occlum.sh (new file, 33 lines)

@@ -0,0 +1,33 @@
#!/bin/bash
set -e
BLUE='\033[1;34m'
NC='\033[0m'
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
python_dir="$script_dir/occlum_instance/image/opt/python-occlum"
if [ ! -d "$python_dir" ]; then
    echo "Error: cannot stat '$python_dir' directory"
    exit 1
fi
cd occlum_instance
# Copy files into Occlum Workspace and build
if [ ! -d "image/lib/python3" ];then
ln -s /opt/python-occlum/bin/python3 image/bin/python3
cp -f /opt/occlum/glibc/lib/libdl.so.2 image/opt/occlum/glibc/lib/
cp -f /opt/occlum/glibc/lib/libutil.so.1 image/opt/occlum/glibc/lib/
cp -f /opt/occlum/glibc/lib/librt.so.1 image/opt/occlum/glibc/lib/
cp -f ../demo.py image
new_json="$(jq '.resource_limits.user_space_size = "6000MB" |
.resource_limits.kernel_space_heap_size = "256MB" |
.process.default_mmap_size = "4000MB" |
.env.default += ["PYTHONHOME=/opt/python-occlum"]' Occlum.json)" && \
echo "${new_json}" > Occlum.json
occlum build
fi
# Run the python demo
echo -e "${BLUE}occlum run /bin/python3 demo.py${NC}"
occlum run /bin/python3 demo.py
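A note on the `jq` step in the build block above: it only rewrites four fields of `Occlum.json` (enclave memory sizes plus `PYTHONHOME`). If `jq` is unavailable, the same edit could be done with a short Python snippet (a sketch using the exact field names from the script):

```python
# Sketch: perform the same Occlum.json edit as the jq command above.
import json

with open("Occlum.json") as f:
    conf = json.load(f)

conf["resource_limits"]["user_space_size"] = "6000MB"
conf["resource_limits"]["kernel_space_heap_size"] = "256MB"
conf["process"]["default_mmap_size"] = "4000MB"
conf["env"]["default"].append("PYTHONHOME=/opt/python-occlum")

with open("Occlum.json", "w") as f:
    json.dump(conf, f, indent=4)
```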