Add Occlum POC example

Zheng, Qi 2022-03-30 15:31:12 +08:00 committed by Zongmin.Gu
parent 538d862345
commit ff48b7d807
23 changed files with 941 additions and 0 deletions

README.md

@@ -227,6 +227,12 @@ Step 4-5 are to be done on the guest OS running inside the Docker container:
Alternatively, to use Occlum without Docker, one can install Occlum on popular Linux distributions like Ubuntu and CentOS with the Occlum DEB and RPM packages, respectively. These packages are provided for every release of Occlum since `0.16.0`. For more info about the packages, see [here](docs/install_occlum_packages.md).
## Demos and example
There are many demo projects that demonstrate how Occlum can be used to build and run user applications; they can be found in [`demos`](./demos/).
There is also a whole-flow confidential inference service [`example`](./example/) that demonstrates how to convert a real application directly from a Docker image to an Occlum image, how to integrate the Occlum [`Init-RA`](./demos/remote_attestation/init_ra_flow/) solution for whole-flow sensitive data protection, and how to generate and run Docker-container-based Occlum instances.
## How to Build?
To build Occlum from the latest source code, do the following steps in an Occlum Docker container (which can be prepared as shown in the last section):

138
example/README.md Normal file

@@ -0,0 +1,138 @@
# Confidential Inference Service
This example introduces the development and deployment of a whole-flow confidential inference service case (`Tensorflow-serving`). By referring to this framework, application developers get the following benefits:
* Directly convert an existing application into an Occlum TEE application.
* No SGX remote-attestation development is required, yet sensitive data is protected through the whole flow.
## Highlights
* Whole-flow sensitive data protection by utilizing the Occlum [`Init-RA`](../demos/remote_attestation/init_ra_flow/) solution.
* Directly generate the inference service (`Tensorflow-serving`) running in a TEE from the Docker image (`tensorflow/serving`) without modification.
* A way to build minimum-size Docker container images based on the Occlum package.
## Overview
![Arch Overview](./overview.png)
The GRPC-RATLS server holds sensitive data, so it is usually deployed in a secure environment. The application consuming the sensitive data can be deployed in a general environment, such as an SGX2 instance provided by a cloud service vendor. There is no hardware SGX requirement for the inference requester. In this example, everything runs on one SGX2 instance.
### Flow
#### Step 1
The GRPC-RATLS server starts and gets ready to serve secret requests through a GRPC channel, which is `localhost:50051` by default in this example.
In this example, two secrets need to be protected.
* **`ssl_config`**
An SSL config file required by tensorflow-serving to set up a secure gRPC channel. It is generated by combining `server.key` and `server.crt`, where `server.key` is a private key and `server.crt` is a self-signed certificate, both generated by `openssl`. For details, please refer to the script [`generate_ssl_config.sh`](./generate_ssl_config.sh).
* **`image_key`**
It is used to encrypt/decrypt the Occlum application RootFS image, which holds Tensorflow-serving in this example. It is generated by the command `occlum gen-image-key image_key`, and the image encryption is done by `occlum build --image-key image_key`. With this encryption, anything saved in the RootFS is well protected. A condensed sketch of producing both secrets follows this list.
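The sketch below follows [`generate_ssl_config.sh`](./generate_ssl_config.sh) and [`build_content.sh`](./build_content.sh); the paths and the `localhost` domain are the example defaults.
```
# Sketch only; see generate_ssl_config.sh and build_content.sh for the full flow.
# 1. Self-signed key/cert pair, later combined into the SSL config:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt -subj "/CN=localhost"
# 2. RootFS image key, then an encrypted Occlum image build:
occlum gen-image-key image_key
occlum build --image-key image_key
```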
#### Step 2
The application starts. First it runs the `init` process. This customized [`init`](./init_ra/) requests `ssl_config` and `image_key` from the GRPC-RATLS server through a secure GRPC-RATLS connection. Then it uses the `image_key` to decrypt the RootFS where the real application is located, mounts the RootFS, and saves the `ssl_config` to `/etc/tf_ssl.cfg` in the RootFS.
For a detailed description of the Init-RA operation in the above two steps, please refer to [`Init-RA`](../demos/remote_attestation/init_ra_flow/).
#### Step 3
The real application, `tensorflow_model_server`, starts with `tf_ssl.cfg` and the prefetched model, and serves inference requests through a secure GRPC channel, which is `localhost:9000` in this example.
An extra `model_key` could be added to protect the models if necessary (not included in this demo).
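Concretely, the launch inside the Occlum instance looks like the sketch below, mirroring [`run.sh`](./run.sh):
```
# Serve the prefetched INCEPTION model over the TLS-protected GRPC port 9000.
occlum run /bin/tensorflow_model_server \
    --model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
    --port=9000 --ssl_config_file="/etc/tf_ssl.cfg"
```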
#### Step 4
Now users can send inference requests with the server certificate (`server.crt`).
## How-to build
Our target is to deploy the demo in separate container images, so a docker build is a necessary step. Thanks to the `docker run in docker` method, this example can be built in an Occlum development container image.
First, please make sure `docker` is installed successfully on your host. Then start the Occlum container (using version `0.27.0-ubuntu20.04` for example) as below.
```
$ sudo docker run --rm -itd --network host \
-v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock \
occlum/occlum:0.27.0-ubuntu20.04
```
All the following steps run in the above container.
### Build all the content
This step prepares all the content and builds the Occlum images.
```
# ./build_content.sh localhost 50051
```
The parameters `localhost` and `50051` indicate the network domain and port of the GRPC server.
Users can modify them depending on the real situation.
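For example, to build against a dedicated GRPC server (the domain below is a hypothetical placeholder):
```
# ./build_content.sh grpc-server.example.com 50051
```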
Below are the two Occlum images.
* **occlum_server**
It plays the role of the GRPC-RATLS server.
Its primary content comes from the demo [`ra_tls`](../demos/ra_tls).
* **occlum_tf**
It plays the roles of Init-RA and tensorflow-serving.
For tensorflow-serving, there is no need to rebuild it from source; the binary from the docker image `tensorflow/serving` is used directly. This example combines a docker image export with the Occlum `copy_bom` tool to generate a workable tensorflow-serving Occlum image (the core of the flow is sketched after this list). For details, please refer to the script [`build_content.sh`](./build_content.sh).
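The sketch below assumes the `tensorflow/serving:2.5.1` image used by [`build_content.sh`](./build_content.sh); [`dump_rootfs.sh`](./dump_rootfs.sh) contains the complete, parameterized export.
```
# Sketch only: dump the rootfs of the official serving image ...
docker export $(docker create tensorflow/serving:2.5.1) -o rootfs.tar
mkdir -p tf_serving/rootfs && tar xf rootfs.tar -C tf_serving/rootfs
# ... then let copy_bom pick the tensorflow_model_server binary out of it.
copy_bom -f tf_serving.yaml --root image --include-dir /opt/occlum/etc/template
```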
### Build runtime container images
Once all the content is ready, the runtime container image build is good to go.
This step builds two container images, `init_ra_server` and `tf_demo`.
```
# ./build_container_images.sh <registry>
```
`<registry>` is the docker registry prefix for the generated container images.
For example, using `demo` here will generate container images:
```
demo/init_ra_server
demo/tf_demo
```
To minimize the size of the container images, only the necessary SGX libraries and the Occlum runtime package (`occlum-runtime`) are installed, plus the packaged Occlum image. The build script and Dockerfile are in the directory [`container`](./container/).
## How-to run
### Start the tensorflow serving
Once the container images are ready, the demo can be started on the host.
Script [`run_container.sh`](./run_container.sh) is provided to run the container images one by one.
```
$ ./run_container.sh -h
Run container images init_ra_server and tf_demo in the background.
usage: run_container.sh [OPTION]...
-s <GRPC Server Domain> default localhost.
-p <GRPC Server port> default 50051.
-u <PCCS URL> default https://localhost:8081/sgx/certification/v3/.
-r <registry prefix> the registry prefix for the demo container images.
-h <usage> usage help
```
For example, using the PCCS service from aliyun:
```
$ sudo ./run_container.sh -s localhost -p 50051 -u https://sgx-dcap-server.cn-shanghai.aliyuncs.com/sgx/certification/v3/ -r demo
```
If everything goes well, the tensorflow serving service will be available through the secure GRPC channel `localhost:9000`.
### Try the inference request
There is an example Python-based [`inference client`](./client/inception_client.py) which sends a picture to the tensorflow serving service to do inference with the previously generated server certificate.
```
# cd client
# python3 inception_client.py --server=localhost:9000 --crt ../ssl_configure/server.crt --image cat.jpg
```

19
example/build_container_images.sh Executable file

@@ -0,0 +1,19 @@
#!/bin/bash
set -e
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
registry=${1:-demo}
pushd ${script_dir}
echo "Build Occlum init-ra Server runtime container image ..."
./container/build_image.sh \
-i ./occlum_server/occlum_instance.tar.gz \
-n init_ra_server -r ${registry}
echo "Build Occlum Tensorflow-serving runtime container image ..."
./container/build_image.sh \
-i ./occlum_tf/occlum_instance.tar.gz \
-n tf_demo -r ${registry}
popd

155
example/build_content.sh Executable file

@@ -0,0 +1,155 @@
#!/bin/bash
set -e
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export DEP_LIBS_DIR="${script_dir}/dep_libs"
export INITRA_DIR="${script_dir}/init_ra"
export RATLS_DIR="${script_dir}/../demos/ra_tls"
export TF_DIR="${script_dir}/tf_serving"
GRPC_SERVER_DOMAIN=${1:-localhost}
GRPC_SERVER_PORT=${2:-50051}
function build_ratls()
{
rm -rf ${DEP_LIBS_DIR} && mkdir ${DEP_LIBS_DIR}
pushd ${RATLS_DIR}
./download_and_prepare.sh
./build_and_install.sh musl
./build_occlum_instance.sh musl
cp ./grpc-src/examples/cpp/ratls/build/libgrpc_ratls_client.so ${DEP_LIBS_DIR}/
cp ./grpc-src/examples/cpp/ratls/build/libhw_grpc_proto.so ${DEP_LIBS_DIR}/
popd
}
function build_tf_serving()
{
# Dump tensorflow/serving container rootfs content
./dump_rootfs.sh -i tensorflow/serving -d ${TF_DIR} -g 2.5.1
pushd ${TF_DIR}
# Download pretrained inception model
rm -rf INCEPTION*
curl -O https://s3-us-west-2.amazonaws.com/tf-test-models/INCEPTION.zip
unzip INCEPTION.zip
popd
}
function build_init_ra()
{
pushd ${INITRA_DIR}
occlum-cargo clean
occlum-cargo build --release
popd
}
function build_tf_instance()
{
# generate tf image key
occlum gen-image-key image_key
rm -rf occlum_tf && occlum new occlum_tf
pushd occlum_tf
# prepare tf_serving content
rm -rf image
copy_bom -f ../tf_serving.yaml --root image --include-dir /opt/occlum/etc/template
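# Tune Occlum.json: enlarge user-space/heap limits, mark the enclave
# non-debuggable, and preset the GRPC_SERVER env consumed by the init process.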
new_json="$(jq '.resource_limits.user_space_size = "7000MB" |
.resource_limits.kernel_space_heap_size="384MB" |
.process.default_heap_size = "128MB" |
.resource_limits.max_num_of_threads = 64 |
.metadata.debuggable = false |
.env.default += ["GRPC_SERVER=localhost:50051"]' Occlum.json)" && \
echo "${new_json}" > Occlum.json
# Update GRPC_SERVER env
GRPC_SERVER="${GRPC_SERVER_DOMAIN}:${GRPC_SERVER_PORT}"
sed -i "s/localhost:50051/$GRPC_SERVER/g" Occlum.json
occlum build --image-key ../image_key
# Get server mrsigner.
# Here client and server use the same signer-key thus using client mrsigner directly.
jq ' .verify_mr_enclave = "off" |
.verify_mr_signer = "on" |
.verify_isv_prod_id = "off" |
.verify_isv_svn = "off" |
.verify_enclave_debuggable = "on" |
.sgx_mrs[0].mr_signer = ''"'`get_mr tf mr_signer`'" |
.sgx_mrs[0].debuggable = false ' ../ra_config_template.json > dynamic_config.json
# prepare init-ra content
rm -rf initfs
copy_bom -f ../init_ra_client.yaml --root initfs --include-dir /opt/occlum/etc/template
# Set GRPC_SERVER_DOMAIN to the hosts
# echo "$IP ${GRPC_SERVER_DOMAIN}" >> initfs/etc/hosts
occlum build -f --image-key ../image_key
occlum package occlum_instance
popd
}
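# Dump the signed enclave metadata via sgx_sign and extract the requested
# measurement (mr_enclave or mr_signer) as a plain hex string.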
function get_mr() {
sgx_sign dump -enclave ${script_dir}/occlum_$1/build/lib/libocclum-libos.signed.so -dumpfile ../metadata_info_$1.txt
if [ "$2" == "mr_enclave" ]; then
sed -n -e '/enclave_hash.m/,/metadata->enclave_css.body.isv_prod_id/p' ../metadata_info_$1.txt |head -3|tail -2|xargs|sed 's/0x//g'|sed 's/ //g'
elif [ "$2" == "mr_signer" ]; then
tail -2 ../metadata_info_$1.txt |xargs|sed 's/0x//g'|sed 's/ //g'
fi
}
function gen_secret_json() {
# First generate cert/key by openssl
./generate_ssl_config.sh localhost
# Then do base64 encode
ssl_config=$(base64 -w 0 ssl_configure/ssl.cfg)
image_key=$(base64 -w 0 image_key)
# Then generate secret json
jq -n --arg ssl_config "$ssl_config" --arg image_key "$image_key" \
'{"ssl_config": $ssl_config, "image_key": $image_key}' > secret_config.json
}
function build_server_instance()
{
gen_secret_json
rm -rf occlum_server && occlum new occlum_server
pushd occlum_server
jq '.verify_mr_enclave = "on" |
.verify_mr_signer = "on" |
.verify_isv_prod_id = "off" |
.verify_isv_svn = "off" |
.verify_enclave_debuggable = "on" |
.sgx_mrs[0].mr_enclave = ''"'`get_mr tf mr_enclave`'" |
.sgx_mrs[0].mr_signer = ''"'`get_mr tf mr_signer`'" |
.sgx_mrs[0].debuggable = false ' ../ra_config_template.json > dynamic_config.json
new_json="$(jq '.resource_limits.user_space_size = "500MB" |
.metadata.debuggable = false ' Occlum.json)" && \
echo "${new_json}" > Occlum.json
rm -rf image
copy_bom -f ../ra_server.yaml --root image --include-dir /opt/occlum/etc/template
# Set GRPC_SERVER_DOMAIN to the hosts
# echo "$IP ${GRPC_SERVER_DOMAIN} " >> image/etc/hosts
occlum build
occlum package occlum_instance
popd
}
build_ratls
build_tf_serving
build_init_ra
build_tf_instance
build_server_instance

BIN
example/client/cat.jpg Normal file

Binary file not shown.


44
example/client/inception_client.py Normal file

@@ -0,0 +1,44 @@
from __future__ import print_function
import grpc
import tensorflow as tf
import argparse
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc


def main():
    with open(args.crt, 'rb') as f:
        creds = grpc.ssl_channel_credentials(f.read())
    channel = grpc.secure_channel(args.server, creds)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    # Send request
    with open(args.image, 'rb') as f:
        # See prediction_service.proto for gRPC request/response details.
        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'INCEPTION'
        request.model_spec.signature_name = 'predict_images'
        input_name = 'images'
        input_shape = [1]
        input_data = f.read()
        request.inputs[input_name].CopyFrom(
            tf.make_tensor_proto(input_data, shape=input_shape))
        result = stub.Predict(request, 10.0)  # 10 secs timeout
        print(result)
        print("Inception Client Passed")


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--server', default='localhost:9000',
                        help='Tensorflow Model Server Address')
    parser.add_argument('--crt', default=None, type=str, help='TLS certificate file path')
    parser.add_argument('--image', default='Siberian_Husky_bi-eyed_Flickr.jpg',
                        help='Path to the image')
    args = parser.parse_args()
    main()

@@ -0,0 +1,3 @@
grpcio>=1.34.0
tensorflow>=2.3.0
tensorflow-serving-api>=2.3.0

30
example/container/Dockerfile_occlum_instance.ubuntu20.04 Normal file

@@ -0,0 +1,30 @@
FROM ubuntu:20.04
LABEL maintainer="Qi Zheng <huaiqing.zq@antgroup.com>"
# Install SGX DCAP and Occlum runtime
ARG PSW_VERSION=2.15.101.1
ARG DCAP_VERSION=1.12.101.1
ENV APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=1
RUN apt update && DEBIAN_FRONTEND="noninteractive" apt install -y --no-install-recommends gnupg wget ca-certificates jq && \
echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu focal main' | tee /etc/apt/sources.list.d/intel-sgx.list && \
wget -qO - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | apt-key add - && \
echo 'deb [arch=amd64] https://occlum.io/occlum-package-repos/debian focal main' | tee /etc/apt/sources.list.d/occlum.list && \
wget -qO - https://occlum.io/occlum-package-repos/debian/public.key | apt-key add - && \
apt update && \
apt install -y libsgx-uae-service=$PSW_VERSION-focal1 && \
apt install -y libsgx-dcap-ql=$DCAP_VERSION-focal1 && \
apt install -y libsgx-dcap-default-qpl=$DCAP_VERSION-focal1 && \
apt install -y occlum-runtime && \
apt clean && \
rm -rf /var/lib/apt/lists/*
ENV PATH="/opt/occlum/build/bin:/usr/local/occlum/bin:$PATH"
# Users need to build their own applications and generate the occlum package first.
ARG OCCLUM_PACKAGE
ADD $OCCLUM_PACKAGE /
COPY container/docker-entrypoint.sh /usr/local/bin/
ENV PCCS_URL="https://localhost:8081/sgx/certification/v3/"
ENTRYPOINT ["docker-entrypoint.sh"]
WORKDIR /occlum_instance

57
example/container/build_image.sh Executable file

@@ -0,0 +1,57 @@
#!/bin/bash
scripts_dir=$(readlink -f $(dirname "${BASH_SOURCE[0]}"))
top_dir=$(dirname "${scripts_dir}")
registry="$(whoami)"
tag="latest"
function usage {
cat << EOM
usage: $(basename "$0") [OPTION]...
-i <occlum package> the occlum instance tar package after doing "occlum package"
-r <registry prefix> the prefix string for registry
-n <container image name>
-g <tag> container image tag
-h <usage> usage help
EOM
exit 0
}
function process_args {
while getopts ":i:r:n:g:h" option; do
case "${option}" in
i) package=${OPTARG};;
r) registry=${OPTARG};;
n) name=${OPTARG};;
g) tag=${OPTARG};;
h) usage;;
esac
done
if [[ "${package}" == "" ]]; then
echo "Error: Please specify your occlum instance package via -i <occlum package>."
exit 1
fi
if [[ "${name}" == "" ]]; then
echo "Error: Please specify your container image name via -n <container image name>."
exit 1
fi
}
function build_docker_occlum_image {
cd ${top_dir}
echo "Build docker Occlum image based on ${package} ..."
sudo -E docker build \
--network host \
--build-arg http_proxy=$http_proxy \
--build-arg https_proxy=$https_proxy \
--build-arg OCCLUM_PACKAGE=${package} \
-f container/Dockerfile_occlum_instance.ubuntu20.04 . \
-t ${registry}/${name}:${tag}
}
process_args "$@"
build_docker_occlum_image

7
example/container/docker-entrypoint.sh Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
# Update PCCS_URL
line=$(grep -n "PCCS_URL" /etc/sgx_default_qcnl.conf | cut -d ":" -f 1)
sed -i "${line}c PCCS_URL=${PCCS_URL}" /etc/sgx_default_qcnl.conf
exec "$@"

46
example/dump_rootfs.sh Executable file

@@ -0,0 +1,46 @@
#!/bin/bash
set -e
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
tag="latest"
dest=${script_dir}
function usage {
cat << EOM
Dump rootfs content from the specified container image.
usage: $(basename "$0") [OPTION]...
-i <container image name> the container image name
-g <tag> container image tag
-d <destination> the directory to put the dumped rootfs
-h <usage> usage help
EOM
exit 0
}
function process_args {
while getopts ":i:g:d:h" option; do
case "${option}" in
i) container=${OPTARG};;
g) tag=${OPTARG};;
d) dest=${OPTARG};;
h) usage;;
esac
done
if [[ "${container}" == "" ]]; then
echo "Error: Please specify the container image -i <container image name>."
exit 1
fi
}
process_args "$@"
rm -rf rootfs.tar
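# Create a stopped container from the image, then export its whole filesystem to rootfs.tar.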
docker export $(docker create --network host --name rootfs_dump ${container}:${tag}) -o rootfs.tar
docker rm rootfs_dump
rm -rf ${dest}/rootfs && mkdir -p ${dest}/rootfs
tar xf rootfs.tar -C ${dest}/rootfs
echo "Successfully dumped ${container}:${tag} rootfs to ${dest}/rootfs."

25
example/generate_ssl_config.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
service_domain_name=${1:-"localhost"}
rm -rf ssl_configure
mkdir ssl_configure
cd ssl_configure
# https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/#client-certificate-authentication
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt -subj "/CN=${service_domain_name}"
# Generate tls configure
## https://stackoverflow.com/questions/59199419/using-tensorflow-model-server-with-ssl-configuration
echo "server_key: '`cat server.key | paste -d "" -s`'" >> ssl.cfg
echo "server_cert: '`cat server.crt | paste -d "" -s`'" >> ssl.cfg
echo "client_verify: false" >> ssl.cfg
sed -i "s/-----BEGIN PRIVATE KEY-----/-----BEGIN PRIVATE KEY-----\\\n/g" ssl.cfg
sed -i "s/-----END PRIVATE KEY-----/\\\n-----END PRIVATE KEY-----/g" ssl.cfg
sed -i "s/-----BEGIN CERTIFICATE-----/-----BEGIN CERTIFICATE-----\\\n/g" ssl.cfg
sed -i "s/-----END CERTIFICATE-----/\\\n-----END CERTIFICATE-----/g" ssl.cfg
echo "Generate server.key server.crt and ssl.cfg successfully!"
#cat ssl.cfg
cd -

94
example/init_ra/Cargo.lock generated Normal file

@@ -0,0 +1,94 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]]
name = "init"
version = "0.0.1"
dependencies = [
"libc",
"serde",
"serde_json",
]
[[package]]
name = "itoa"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dd25036021b0de88a0aff6b850051563c6516d0bf53f8638938edbb9de732736"
[[package]]
name = "libc"
version = "0.2.84"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1cca32fa0182e8c0989459524dc356b8f2b5c10f1b9eb521b7d182c03cf8c5ff"
[[package]]
name = "proc-macro2"
version = "1.0.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e0704ee1a7e00d7bb417d0770ea303c1bccbabf0ef1667dae92b5967f5f8a71"
dependencies = [
"unicode-xid",
]
[[package]]
name = "quote"
version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7"
dependencies = [
"proc-macro2",
]
[[package]]
name = "ryu"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e"
[[package]]
name = "serde"
version = "1.0.123"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92d5161132722baa40d802cc70b15262b98258453e85e5d1d365c757c73869ae"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.123"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9391c295d64fc0abb2c556bad848f33cb8296276b1ad2677d1ae1ace4f258f31"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.62"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea1c6153794552ea7cf7cf63b1231a25de00ec90db326ba6264440fa08e31486"
dependencies = [
"itoa",
"ryu",
"serde",
]
[[package]]
name = "syn"
version = "1.0.60"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c700597eca8a5a762beb35753ef6b94df201c81cca676604f547495a0d7f0081"
dependencies = [
"proc-macro2",
"quote",
"unicode-xid",
]
[[package]]
name = "unicode-xid"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f7fe0bb3479651439c9112f72b6c505038574c9fbb575ed1bf3b797fa39dd564"

11
example/init_ra/Cargo.toml Normal file

@@ -0,0 +1,11 @@
[package]
name = "init"
version = "0.0.1"
build = "build.rs"
authors = ["LI Qing geding.lq@antgroup.com"]
edition = "2018"
[dependencies]
libc = "0.2.84"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

5
example/init_ra/build.rs Normal file

@@ -0,0 +1,5 @@
fn main() {
println!("cargo:rustc-link-search=native=../dep_libs");
println!("cargo:rustc-link-lib=dylib=grpc_ratls_client");
println!("cargo:rustc-link-lib=dylib=hw_grpc_proto");
}

159
example/init_ra/src/main.rs Normal file

@@ -0,0 +1,159 @@
extern crate libc;
extern crate serde;
extern crate serde_json;
use libc::syscall;
use serde::Deserialize;
use std::env;
use std::error::Error;
use std::fs;
use std::fs::File;
use std::io::{ErrorKind, Read};
use std::ffi::CString;
use std::os::raw::{c_int, c_char};
#[link(name = "grpc_ratls_client")]
extern "C" {
fn grpc_ratls_get_secret(
server_addr: *const c_char, // grpc server address+port, such as "localhost:50051"
config_json: *const c_char, // ratls handshake config json file
name: *const c_char, // secret name to be requested
secret_file: *const c_char // secret file to be saved
) -> c_int;
}
fn main() -> Result<(), Box<dyn Error>> {
// Load the configuration from initfs
const IMAGE_CONFIG_FILE: &str = "/etc/image_config.json";
let image_config = load_config(IMAGE_CONFIG_FILE)?;
// Get the MAC of Occlum.json.protected file
let occlum_json_mac = {
let mut mac: sgx_aes_gcm_128bit_tag_t = Default::default();
parse_str_to_bytes(&image_config.occlum_json_mac, &mut mac)?;
mac
};
let occlum_json_mac_ptr = &occlum_json_mac as *const sgx_aes_gcm_128bit_tag_t;
// Get grpc server address from environment GRPC_SERVER
let server_addr = CString::new(
env::var("GRPC_SERVER").unwrap_or("localhost:50051".to_string()))
.unwrap();
let config_json = CString::new("dynamic_config.json").unwrap();
// Get the key of FS image if needed
let key = match &image_config.image_type[..] {
"encrypted" => {
// Get the image encrypted key through RA
let secret = CString::new("image_key").unwrap();
let filename = CString::new("/etc/image_key").unwrap();
let ret = unsafe {
grpc_ratls_get_secret(
server_addr.as_ptr(),
config_json.as_ptr(),
secret.as_ptr(),
filename.as_ptr())
};
if ret != 0 {
println!("grpc_ratls_get_secret failed return {}", ret);
return Err(Box::new(std::io::Error::last_os_error()));
}
const IMAGE_KEY_FILE: &str = "/etc/image_key";
let key_str = load_key(IMAGE_KEY_FILE)?;
let mut key: sgx_key_128bit_t = Default::default();
parse_str_to_bytes(&key_str, &mut key)?;
Some(key)
}
"integrity-only" => None,
_ => unreachable!(),
};
let key_ptr = key
.as_ref()
.map(|key| key as *const sgx_key_128bit_t)
.unwrap_or(std::ptr::null());
// Get certificate
let secret = CString::new("ssl_config").unwrap();
let filename = CString::new("ssl_file").unwrap();
let ret = unsafe {
grpc_ratls_get_secret(
server_addr.as_ptr(),
config_json.as_ptr(),
secret.as_ptr(),
filename.as_ptr())
};
if ret != 0 {
println!("grpc_ratls_get_secret failed return {}", ret);
return Err(Box::new(std::io::Error::last_os_error()));
}
let ssl_secret = fs::read_to_string(filename.into_string().unwrap())
.expect("Something went wrong reading the file");
// Mount the image
const SYS_MOUNT_FS: i64 = 363;
let ret = unsafe { syscall(SYS_MOUNT_FS, key_ptr, occlum_json_mac_ptr) };
if ret < 0 {
return Err(Box::new(std::io::Error::last_os_error()));
}
// Write the secrets to rootfs
fs::write("/etc/tf_ssl.cfg", ssl_secret.into_bytes())?;
Ok(())
}
#[allow(non_camel_case_types)]
type sgx_key_128bit_t = [u8; 16];
#[allow(non_camel_case_types)]
type sgx_aes_gcm_128bit_tag_t = [u8; 16];
#[derive(Deserialize, Debug)]
#[serde(deny_unknown_fields)]
struct ImageConfig {
occlum_json_mac: String,
image_type: String,
}
fn load_config(config_path: &str) -> Result<ImageConfig, Box<dyn Error>> {
let mut config_file = File::open(config_path)?;
let config_json = {
let mut config_json = String::new();
config_file.read_to_string(&mut config_json)?;
config_json
};
let config: ImageConfig = serde_json::from_str(&config_json)?;
Ok(config)
}
fn load_key(key_path: &str) -> Result<String, Box<dyn Error>> {
let mut key_file = File::open(key_path)?;
let mut key = String::new();
key_file.read_to_string(&mut key)?;
Ok(key.trim_end_matches(|c| c == '\r' || c == '\n').to_string())
}
fn parse_str_to_bytes(arg_str: &str, bytes: &mut [u8]) -> Result<(), Box<dyn Error>> {
let bytes_str_vec = {
let bytes_str_vec: Vec<&str> = arg_str.split('-').collect();
if bytes_str_vec.len() != bytes.len() {
return Err(Box::new(std::io::Error::new(
ErrorKind::InvalidData,
"The length or format of Key/MAC string is invalid",
)));
}
bytes_str_vec
};
for (byte_i, byte_str) in bytes_str_vec.iter().enumerate() {
bytes[byte_i] = u8::from_str_radix(byte_str, 16)?;
}
Ok(())
}

19
example/init_ra_client.yaml Normal file

@@ -0,0 +1,19 @@
includes:
  - base.yaml
targets:
  - target: /bin/
    copy:
      - files:
          - ${INITRA_DIR}/target/x86_64-unknown-linux-musl/release/init
  - target: /lib/
    copy:
      - files:
          - ${DEP_LIBS_DIR}/libgrpc_ratls_client.so
  - target: /
    copy:
      - files:
          - dynamic_config.json
  - target: /usr/share/grpc/
    copy:
      - files:
          - ${RATLS_DIR}/grpc-src/etc/roots.pem

BIN
example/overview.png Normal file

Binary file not shown.


16
example/ra_config_template.json Normal file

@@ -0,0 +1,16 @@
{
"verify_mr_enclave" : "on",
"verify_mr_signer" : "on",
"verify_isv_prod_id" : "on",
"verify_isv_svn" : "on",
"verify_enclave_debuggable" : "on",
"sgx_mrs": [
{
"mr_enclave" : "",
"mr_signer" : "",
"isv_prod_id" : "0",
"isv_svn" : "0",
"debuggable" : false
}
]
}

16
example/ra_server.yaml Normal file

@@ -0,0 +1,16 @@
includes:
  - base.yaml
targets:
  - target: /bin/
    copy:
      - files:
          - ${RATLS_DIR}/grpc-src/examples/cpp/ratls/build/server
  - target: /
    copy:
      - files:
          - dynamic_config.json
          - ../secret_config.json
  - target: /usr/share/grpc/
    copy:
      - files:
          - ${RATLS_DIR}/grpc-src/etc/roots.pem

22
example/run.sh Executable file

@@ -0,0 +1,22 @@
#!/bin/bash
set -e
GRPC_SERVER_DOMAIN=${1:-localhost}
GRPC_SERVER_PORT=${2:-50051}
GRPC_SERVER="${GRPC_SERVER_DOMAIN}:${GRPC_SERVER_PORT}"
echo "Start GRPC server on backgound ..."
pushd occlum_server
occlum run /bin/server ${GRPC_SERVER} &
popd
sleep 3
echo "Start Tensorflow-Serving on backgound ..."
pushd occlum_tf
taskset -c 0,1 occlum run /bin/tensorflow_model_server \
--model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
--port=9000 --ssl_config_file="/etc/tf_ssl.cfg"
popd

56
example/run_container.sh Executable file

@@ -0,0 +1,56 @@
#!/bin/bash
set -e
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
grpc_domain=localhost
grpc_port=50051
pccs_url="https://localhost:8081/sgx/certification/v3/"
registry="demo"
function usage {
cat << EOM
Run container images init_ra_server and tf_demo in the background.
usage: $(basename "$0") [OPTION]...
-s <GRPC Server Domain> default localhost.
-p <GRPC Server port> default 50051.
-u <PCCS URL> default https://localhost:8081/sgx/certification/v3/.
-r <registry prefix> the registry prefix for the demo container images.
-h <usage> usage help
EOM
exit 0
}
function process_args {
while getopts ":s:p:u:r:h" option; do
case "${option}" in
s) grpc_domain=${OPTARG};;
p) grpc_port=${OPTARG};;
u) pccs_url=${OPTARG};;
r) registry=${OPTARG};;
h) usage;;
esac
done
}
process_args "$@"
echo "Start GRPC server on backgound ..."
docker run --network host \
--device /dev/sgx/enclave --device /dev/sgx/provision \
--env PCCS_URL=${pccs_url} \
${registry}/init_ra_server \
occlum run /bin/server ${grpc_domain}:${grpc_port} &
sleep 3
echo "Start Tensorflow-Serving on backgound ..."
docker run --network host \
--device /dev/sgx/enclave --device /dev/sgx/provision \
--env PCCS_URL=${pccs_url} \
${registry}/tf_demo \
taskset -c 0,1 occlum run /bin/tensorflow_model_server \
--model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
--port=9000 --ssl_config_file="/etc/tf_ssl.cfg" &

13
example/tf_serving.yaml Normal file

@@ -0,0 +1,13 @@
includes:
  - base.yaml
targets:
  # copy model
  - target: /model
    copy:
      - dirs:
          - ${TF_DIR}/INCEPTION
  - target: /bin
    copy:
      - files:
          - ${TF_DIR}/rootfs/usr/bin/tensorflow_model_server