diff --git a/README.md b/README.md
index 1a42d78b..fc62ffc9 100644
--- a/README.md
+++ b/README.md
@@ -227,6 +227,12 @@ Step 4-5 are to be done on the guest OS running inside the Docker container:
 
 Alternatively, to use Occlum without Docker, one can install Occlum on popular Linux distributions like Ubuntu and CentOS with the Occlum DEB and RPM packages, respectively. These packages are provided for every release of Occlum since `0.16.0`. For more info about the packages, see [here](docs/install_occlum_packages.md).
 
+## Demos and example
+
+There are many projects under [`demos`](./demos/) that demonstrate how Occlum can be used to build and run user applications.
+
+There is also a whole-flow confidential inference service [`example`](./example/). It demonstrates how to convert a real application directly from a Docker image to an Occlum image, how to integrate the Occlum [`Init-RA`](./demos/remote_attestation/init_ra_flow/) solution for whole-flow sensitive data protection, and how to generate and run Docker containers based on Occlum instances.
+
 ## How to Build?
 
 To build Occlum from the latest source code, do the following steps in an Occlum Docker container (which can be prepared as shown in the last section):
diff --git a/example/README.md b/example/README.md
new file mode 100644
index 00000000..fcdc0616
--- /dev/null
+++ b/example/README.md
@@ -0,0 +1,138 @@
+# Confidential Inference Service
+
+This example walks through the development and deployment of a whole-flow confidential inference service (`Tensorflow-serving`). By following this framework, application developers get the benefits below.
+
+* Directly transfer an existing application to an Occlum TEE application.
+* Whole-flow sensitive data protection with no SGX remote attestation development required.
+
+## Highlights
+
+* Whole-flow sensitive data protection, utilizing the Occlum [`Init-RA`](../demos/remote_attestation/init_ra_flow/) solution.
+
+* The inference service (`Tensorflow-serving`) running in the TEE is generated directly from the Docker image (`tensorflow/serving`), without modification.
+
+* A way to build minimum-size Docker container images based on the Occlum package.
+
+## Overview
+
+![Arch Overview](./overview.png)
+
+The GRPC-RATLS server holds the sensitive data, so it is usually deployed in a secure environment. The application consuming the sensitive data can be deployed in a general environment, such as an SGX2 instance provided by a cloud service vendor. The inference requester has no hardware SGX requirement. In this example, everything runs on one SGX2 instance.
+
+### Flow
+
+#### Step 1
+
+The GRPC-RATLS server starts and gets ready to answer secret requests through a GRPC channel, `localhost:50051` by default in this example.
+
+In this example, two secrets need to be protected.
+
+* **`ssl_config`**
+An SSL config file required by tensorflow-serving to set up a secure gRPC channel. It is generated by combining `server.key` and `server.crt`, where `server.key` is a private key and `server.crt` is a self-signed certificate, both generated by `openssl`. For details, please refer to the script [`generate_ssl_config.sh`](./generate_ssl_config.sh).
+
+* **`image_key`**
+It is used to encrypt/decrypt the Occlum application RootFS image, which holds tensorflow-serving in this example. It is generated by the command `occlum gen-image-key image_key`, and the image encryption is done by `occlum build --image-key image_key`. With this encryption, anything saved in the RootFS is well protected.
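+
+For reference, the below is a minimal sketch of how the two secrets get packed into the secret store consumed by the GRPC-RATLS server (it mirrors `gen_secret_json` in [`build_content.sh`](./build_content.sh)):
+
+```
+# Base64-encode both secrets so they can be embedded in a JSON document
+ssl_config=$(base64 -w 0 ssl_configure/ssl.cfg)
+image_key=$(base64 -w 0 image_key)
+
+# Pack them into the secret store served by the GRPC-RATLS server
+jq -n --arg ssl_config "$ssl_config" --arg image_key "$image_key" \
+    '{"ssl_config": $ssl_config, "image_key": $image_key}' > secret_config.json
+```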
+
+#### Step 2
+
+The application starts. First, the `init` process runs. This customized [`init`](./init_ra/) requests `ssl_config` and `image_key` from the GRPC-RATLS server through a secure GRPC-RATLS connection. It then uses the `image_key` to decrypt the RootFS where the real application is located, mounts the RootFS, and saves the `ssl_config` to `/etc/tf_ssl.cfg` in the RootFS.
+
+A detailed description of the Init-RA operation performed in the above two steps can be found in [`Init-RA`](../demos/remote_attestation/init_ra_flow/).
+
+#### Step 3
+
+The real application `tensorflow_model_server` starts with `tf_ssl.cfg` and the prefetched model, and serves inference requests through a secure GRPC channel, `localhost:9000` in this example.
+
+An extra model_key could be added to protect the models if necessary. (This is not included in this demo.)
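+
+For reference, the command below is how [`run.sh`](./run.sh) launches the serving process inside the `occlum_tf` instance (`taskset -c 0,1` simply pins the server to two CPU cores):
+
+```
+taskset -c 0,1 occlum run /bin/tensorflow_model_server \
+    --model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
+    --port=9000 --ssl_config_file="/etc/tf_ssl.cfg"
+```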
+
+#### Step 4
+
+Now users can send inference requests with the server certificate (`server.crt`).
+
+## How-to build
+
+Our target is to deploy the demo in separate container images, so a docker build is a necessary step. Thanks to the `docker run in docker` method, this example can be built in the Occlum development container image.
+
+First, please make sure `docker` is installed successfully on your host. Then start the Occlum container (using version `0.27.0-ubuntu20.04` for example) as below.
+```
+$ sudo docker run --rm -itd --network host \
+        -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock \
+        occlum/occlum:0.27.0-ubuntu20.04
+```
+
+All the following steps run in the above container.
+
+### Build all the content
+
+This step prepares all the content and builds the Occlum images.
+
+```
+# ./build_content.sh localhost 50051
+```
+
+The parameters `localhost` and `50051` indicate the network domain and port of the GRPC server.
+Users can modify them to fit the real situation.
+
+The two Occlum images are described below.
+
+* **occlum_server**
+
+It plays the role of the GRPC-RATLS server.
+Its primary content comes from the demo [`ra_tls`](../demos/ra_tls).
+
+* **occlum_tf**
+
+It plays the roles of Init-RA and tensorflow-serving.
+
+There is no need to rebuild tensorflow-serving from source; the one from the Docker image `tensorflow/serving` is used directly. This example combines a Docker image export with the Occlum `copy_bom` tool to generate a workable tensorflow-serving Occlum image. For details, please refer to the script [`build_content.sh`](./build_content.sh).
+
+### Build runtime container images
+
+Once all the content is ready, the runtime container images are ready to build.
+This step builds two container images, `init_ra_server` and `tf_demo`.
+```
+# ./build_container_images.sh <registry>
+```
+
+`<registry>` means the docker registry prefix for the generated container images.
+For example, using `demo` here will generate the container images:
+```
+demo/init_ra_server
+demo/tf_demo
+```
+
+To minimize the size of the container images, only the necessary SGX libraries and the runtime Occlum RPM get installed, plus the packaged Occlum image. The build script and Dockerfile are in the directory [`container`](./container/).
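+
+Once the build succeeds, the generated images can be checked with a standard docker command, e.g.:
+
+```
+# Expect <registry>/init_ra_server and <registry>/tf_demo to be listed
+docker images | grep -E "init_ra_server|tf_demo"
+```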
+
+## How-to run
+
+### Start the tensorflow serving
+
+Once the container images are ready, the demo can be started on the host.
+
+The script [`run_container.sh`](./run_container.sh) is provided to run the container images one by one.
+```
+$ ./run_container.sh -h
+Run container images init_ra_server and tf_demo in the background.
+usage: run_container.sh [OPTION]...
+    -s <grpc server domain> default localhost.
+    -p <grpc server port> default 50051.
+    -u <pccs url> default https://localhost:8081/sgx/certification/v3/.
+    -r <registry> the registry prefix for this demo's container images.
+    -h usage help
+```
+
+For example, to use the PCCS service from aliyun:
+```
+$ sudo ./run_container.sh -s localhost -p 50051 -u https://sgx-dcap-server.cn-shanghai.aliyuncs.com/sgx/certification/v3/ -r demo
+```
+
+If everything goes well, the tensorflow serving service is available through the secure GRPC channel `localhost:9000`.
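+
+Whether both containers came up can be checked with a standard docker command, e.g.:
+
+```
+# Both the init_ra_server and tf_demo containers should show up as running
+docker ps --format '{{.Image}} {{.Status}}'
+```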
+
+### Try the inference request
+
+There is an example python-based [`inference client`](./client/inception_client.py), which sends a picture to the tensorflow serving service to do inference with the previously generated server certificate.
+
+```
+# cd client
+# python3 inception_client.py --server=localhost:9000 --crt ../ssl_configure/server.crt --image cat.jpg
+```
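+
+The client depends on `grpcio`, `tensorflow` and `tensorflow-serving-api`, as listed in [`requirements.txt`](./client/requirements.txt). If they are missing, something like the below should work first:
+
+```
+# Run inside the client directory
+pip3 install -r requirements.txt
+```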
diff --git a/example/build_container_images.sh b/example/build_container_images.sh
new file mode 100755
index 00000000..19f0c524
--- /dev/null
+++ b/example/build_container_images.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+registry=${1:-demo}
+
+pushd ${script_dir}
+
+echo "Build Occlum init-ra Server runtime container image ..."
+./container/build_image.sh \
+    -i ./occlum_server/occlum_instance.tar.gz \
+    -n init_ra_server -r ${registry}
+
+echo "Build Occlum Tensorflow-serving runtime container image ..."
+./container/build_image.sh \
+    -i ./occlum_tf/occlum_instance.tar.gz \
+    -n tf_demo -r ${registry}
+
+popd
diff --git a/example/build_content.sh b/example/build_content.sh
new file mode 100755
index 00000000..5f368fda
--- /dev/null
+++ b/example/build_content.sh
@@ -0,0 +1,155 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+export DEP_LIBS_DIR="${script_dir}/dep_libs"
+export INITRA_DIR="${script_dir}/init_ra"
+export RATLS_DIR="${script_dir}/../demos/ra_tls"
+export TF_DIR="${script_dir}/tf_serving"
+
+GRPC_SERVER_DOMAIN=${1:-localhost}
+GRPC_SERVER_PORT=${2:-50051}
+
+function build_ratls()
+{
+    rm -rf ${DEP_LIBS_DIR} && mkdir ${DEP_LIBS_DIR}
+    pushd ${RATLS_DIR}
+    ./download_and_prepare.sh
+    ./build_and_install.sh musl
+    ./build_occlum_instance.sh musl
+
+    cp ./grpc-src/examples/cpp/ratls/build/libgrpc_ratls_client.so ${DEP_LIBS_DIR}/
+    cp ./grpc-src/examples/cpp/ratls/build/libhw_grpc_proto.so ${DEP_LIBS_DIR}/
+
+    popd
+}
+
+function build_tf_serving()
+{
+    # Dump tensorflow/serving container rootfs content
+    ./dump_rootfs.sh -i tensorflow/serving -d ${TF_DIR} -g 2.5.1
+    pushd ${TF_DIR}
+    # Download pretrained inception model
+    rm -rf INCEPTION*
+    curl -O https://s3-us-west-2.amazonaws.com/tf-test-models/INCEPTION.zip
+    unzip INCEPTION.zip
+    popd
+}
+
+function build_init_ra()
+{
+    pushd ${INITRA_DIR}
+    occlum-cargo clean
+    occlum-cargo build --release
+    popd
+}
+
+function build_tf_instance()
+{
+    # generate tf image key
+    occlum gen-image-key image_key
+
+    rm -rf occlum_tf && occlum new occlum_tf
+    pushd occlum_tf
+
+    # prepare tf_serving content
+    rm -rf image
+    copy_bom -f ../tf_serving.yaml --root image --include-dir /opt/occlum/etc/template
+
+    new_json="$(jq '.resource_limits.user_space_size = "7000MB" |
+        .resource_limits.kernel_space_heap_size="384MB" |
+        .process.default_heap_size = "128MB" |
+        .resource_limits.max_num_of_threads = 64 |
+        .metadata.debuggable = false |
+        .env.default += ["GRPC_SERVER=localhost:50051"]' Occlum.json)" && \
+    echo "${new_json}" > Occlum.json
+
+    # Update GRPC_SERVER env
+    GRPC_SERVER="${GRPC_SERVER_DOMAIN}:${GRPC_SERVER_PORT}"
+    sed -i "s/localhost:50051/$GRPC_SERVER/g" Occlum.json
+
+    occlum build --image-key ../image_key
+
+    # Get server mrsigner.
+    # Here client and server use the same signer-key thus using client mrsigner directly.
+    jq ' .verify_mr_enclave = "off" |
+        .verify_mr_signer = "on" |
+        .verify_isv_prod_id = "off" |
+        .verify_isv_svn = "off" |
+        .verify_enclave_debuggable = "on" |
+        .sgx_mrs[0].mr_signer = ''"'`get_mr tf mr_signer`'" |
+        .sgx_mrs[0].debuggable = false ' ../ra_config_template.json > dynamic_config.json
+
+    # prepare init-ra content
+    rm -rf initfs
+    copy_bom -f ../init_ra_client.yaml --root initfs --include-dir /opt/occlum/etc/template
+
+    # Set GRPC_SERVER_DOMAIN to the hosts
+    # echo "$IP ${GRPC_SERVER_DOMAIN}" >> initfs/etc/hosts
+
+    occlum build -f --image-key ../image_key
+    occlum package occlum_instance
+
+    popd
+}
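+
+# Extract an enclave measurement from the signed Occlum LibOS.
+# "sgx_sign dump" writes the enclave metadata to a text file; the enclave
+# hash (mr_enclave) or the signer hash (mr_signer) is then parsed out of
+# that dump and joined into one plain hex string for the RA config.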
+function get_mr() {
+    sgx_sign dump -enclave ${script_dir}/occlum_$1/build/lib/libocclum-libos.signed.so -dumpfile ../metadata_info_$1.txt
+    if [ "$2" == "mr_enclave" ]; then
+        sed -n -e '/enclave_hash.m/,/metadata->enclave_css.body.isv_prod_id/p' ../metadata_info_$1.txt |head -3|tail -2|xargs|sed 's/0x//g'|sed 's/ //g'
+    elif [ "$2" == "mr_signer" ]; then
+        tail -2 ../metadata_info_$1.txt |xargs|sed 's/0x//g'|sed 's/ //g'
+    fi
+}
+
+function gen_secret_json() {
+    # First generate cert/key by openssl
+    ./generate_ssl_config.sh localhost
+
+    # Then do base64 encode
+    ssl_config=$(base64 -w 0 ssl_configure/ssl.cfg)
+    image_key=$(base64 -w 0 image_key)
+
+    # Then generate secret json
+    jq -n --arg ssl_config "$ssl_config" --arg image_key "$image_key" \
+        '{"ssl_config": $ssl_config, "image_key": $image_key}' > secret_config.json
+}
+
+function build_server_instance()
+{
+    gen_secret_json
+    rm -rf occlum_server && occlum new occlum_server
+    pushd occlum_server
+
+    jq '.verify_mr_enclave = "on" |
+        .verify_mr_signer = "on" |
+        .verify_isv_prod_id = "off" |
+        .verify_isv_svn = "off" |
+        .verify_enclave_debuggable = "on" |
+        .sgx_mrs[0].mr_enclave = ''"'`get_mr tf mr_enclave`'" |
+        .sgx_mrs[0].mr_signer = ''"'`get_mr tf mr_signer`'" |
+        .sgx_mrs[0].debuggable = false ' ../ra_config_template.json > dynamic_config.json
+
+    new_json="$(jq '.resource_limits.user_space_size = "500MB" |
+        .metadata.debuggable = false ' Occlum.json)" && \
+    echo "${new_json}" > Occlum.json
+
+    rm -rf image
+    copy_bom -f ../ra_server.yaml --root image --include-dir /opt/occlum/etc/template
+
+    # Set GRPC_SERVER_DOMAIN to the hosts
+    # echo "$IP ${GRPC_SERVER_DOMAIN} " >> image/etc/hosts
+
+    occlum build
+    occlum package occlum_instance
+
+    popd
+}
+
+build_ratls
+build_tf_serving
+build_init_ra
+
+build_tf_instance
+build_server_instance
diff --git a/example/client/cat.jpg b/example/client/cat.jpg
new file mode 100644
index 00000000..945424f5
Binary files /dev/null and b/example/client/cat.jpg differ
diff --git a/example/client/inception_client.py b/example/client/inception_client.py
new file mode 100644
index 00000000..d8518f28
--- /dev/null
+++ b/example/client/inception_client.py
@@ -0,0 +1,44 @@
+from __future__ import print_function
+
+import grpc
+import tensorflow as tf
+import argparse
+
+from tensorflow_serving.apis import predict_pb2
+from tensorflow_serving.apis import prediction_service_pb2_grpc
+
+
+def main():
+    with open(args.crt, 'rb') as f:
+        creds = grpc.ssl_channel_credentials(f.read())
+    channel = grpc.secure_channel(args.server, creds)
+    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
+    # Send request
+    with open(args.image, 'rb') as f:
+        # See prediction_service.proto for gRPC request/response details.
+        request = predict_pb2.PredictRequest()
+        request.model_spec.name = 'INCEPTION'
+        request.model_spec.signature_name = 'predict_images'
+
+        input_name = 'images'
+        input_shape = [1]
+        input_data = f.read()
+        request.inputs[input_name].CopyFrom(
+            tf.make_tensor_proto(input_data, shape=input_shape))
+
+        result = stub.Predict(request, 10.0)  # 10 secs timeout
+        print(result)
+
+    print("Inception Client Passed")
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--server', default='localhost:9000',
+                        help='Tensorflow Model Server Address')
+    parser.add_argument('--crt', default=None, type=str, help='TLS certificate file path')
+    parser.add_argument('--image', default='Siberian_Husky_bi-eyed_Flickr.jpg',
+                        help='Path to the image')
+    args = parser.parse_args()
+
+    main()
\ No newline at end of file
diff --git a/example/client/requirements.txt b/example/client/requirements.txt
new file mode 100644
index 00000000..2c217a72
--- /dev/null
+++ b/example/client/requirements.txt
@@ -0,0 +1,3 @@
+grpcio>=1.34.0
+tensorflow>=2.3.0
+tensorflow-serving-api>=2.3.0
= "c700597eca8a5a762beb35753ef6b94df201c81cca676604f547495a0d7f0081" +dependencies = [ + "proc-macro2", + "quote", + "unicode-xid", +] + +[[package]] +name = "unicode-xid" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f7fe0bb3479651439c9112f72b6c505038574c9fbb575ed1bf3b797fa39dd564" diff --git a/example/init_ra/Cargo.toml b/example/init_ra/Cargo.toml new file mode 100644 index 00000000..fae53884 --- /dev/null +++ b/example/init_ra/Cargo.toml @@ -0,0 +1,11 @@ +[package] +name = "init" +version = "0.0.1" +build = "build.rs" +authors = ["LI Qing geding.lq@antgroup.com"] +edition = "2018" + +[dependencies] +libc = "0.2.84" +serde = { version = "1.0", features = ["derive"] } +serde_json = "1.0" diff --git a/example/init_ra/build.rs b/example/init_ra/build.rs new file mode 100644 index 00000000..8a70a4cd --- /dev/null +++ b/example/init_ra/build.rs @@ -0,0 +1,5 @@ +fn main() { + println!("cargo:rustc-link-search=native=../dep_libs"); + println!("cargo:rustc-link-lib=dylib=grpc_ratls_client"); + println!("cargo:rustc-link-lib=dylib=hw_grpc_proto"); +} \ No newline at end of file diff --git a/example/init_ra/src/main.rs b/example/init_ra/src/main.rs new file mode 100644 index 00000000..c02aa149 --- /dev/null +++ b/example/init_ra/src/main.rs @@ -0,0 +1,159 @@ +extern crate libc; +extern crate serde; +extern crate serde_json; + +use libc::syscall; +use serde::Deserialize; + +use std::env; +use std::error::Error; +use std::fs; +use std::fs::File; +use std::io::{ErrorKind, Read}; + +use std::ffi::CString; +use std::os::raw::{c_int, c_char}; + +#[link(name = "grpc_ratls_client")] +extern "C" { + fn grpc_ratls_get_secret( + server_addr: *const c_char, // grpc server address+port, such as "localhost:50051" + config_json: *const c_char, // ratls handshake config json file + name: *const c_char, // secret name to be requested + secret_file: *const c_char // secret file to be saved + ) -> c_int; +} + +fn main() -> Result<(), Box> { + // Load the configuration from initfs + const IMAGE_CONFIG_FILE: &str = "/etc/image_config.json"; + let image_config = load_config(IMAGE_CONFIG_FILE)?; + + // Get the MAC of Occlum.json.protected file + let occlum_json_mac = { + let mut mac: sgx_aes_gcm_128bit_tag_t = Default::default(); + parse_str_to_bytes(&image_config.occlum_json_mac, &mut mac)?; + mac + }; + let occlum_json_mac_ptr = &occlum_json_mac as *const sgx_aes_gcm_128bit_tag_t; + + // Get grpc server address from environment GRPC_SERVER + let server_addr = CString::new( + env::var("GRPC_SERVER").unwrap_or("localhost:50051".to_string())) + .unwrap(); + let config_json = CString::new("dynamic_config.json").unwrap(); + + // Get the key of FS image if needed + let key = match &image_config.image_type[..] 
diff --git a/example/generate_ssl_config.sh b/example/generate_ssl_config.sh
new file mode 100755
index 00000000..9b264a7f
--- /dev/null
+++ b/example/generate_ssl_config.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+service_domain_name=${1:-"localhost"}
+
+rm -rf ssl_configure
+mkdir ssl_configure
+cd ssl_configure
+
+# https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/#client-certificate-authentication
+openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt -subj "/CN=${service_domain_name}"
+
+# Generate the tls config
+## https://stackoverflow.com/questions/59199419/using-tensorflow-model-server-with-ssl-configuration
+
+echo "server_key: '`cat server.key | paste -d "" -s`'" >> ssl.cfg
+echo "server_cert: '`cat server.crt | paste -d "" -s`'" >> ssl.cfg
+echo "client_verify: false" >> ssl.cfg
+
+sed -i "s/-----BEGIN PRIVATE KEY-----/-----BEGIN PRIVATE KEY-----\\\n/g" ssl.cfg
+sed -i "s/-----END PRIVATE KEY-----/\\\n-----END PRIVATE KEY-----/g" ssl.cfg
+sed -i "s/-----BEGIN CERTIFICATE-----/-----BEGIN CERTIFICATE-----\\\n/g" ssl.cfg
+sed -i "s/-----END CERTIFICATE-----/\\\n-----END CERTIFICATE-----/g" ssl.cfg
+
+echo "Generated server.key, server.crt and ssl.cfg successfully!"
+#cat ssl.cfg
+cd -
diff --git a/example/init_ra/Cargo.lock b/example/init_ra/Cargo.lock
new file mode 100644
index 00000000..95122653
--- /dev/null
+++ b/example/init_ra/Cargo.lock
@@ -0,0 +1,94 @@
+# This file is automatically @generated by Cargo.
+# It is not intended for manual editing.
+[[package]]
+name = "init"
+version = "0.0.1"
+dependencies = [
+ "libc",
+ "serde",
+ "serde_json",
+]
+
+[[package]]
+name = "itoa"
+version = "0.4.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "dd25036021b0de88a0aff6b850051563c6516d0bf53f8638938edbb9de732736"
+
+[[package]]
+name = "libc"
+version = "0.2.84"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1cca32fa0182e8c0989459524dc356b8f2b5c10f1b9eb521b7d182c03cf8c5ff"
+
+[[package]]
+name = "proc-macro2"
+version = "1.0.24"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e0704ee1a7e00d7bb417d0770ea303c1bccbabf0ef1667dae92b5967f5f8a71"
+dependencies = [
+ "unicode-xid",
+]
+
+[[package]]
+name = "quote"
+version = "1.0.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7"
+dependencies = [
+ "proc-macro2",
+]
+
+[[package]]
+name = "ryu"
+version = "1.0.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e"
+
+[[package]]
+name = "serde"
+version = "1.0.123"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "92d5161132722baa40d802cc70b15262b98258453e85e5d1d365c757c73869ae"
+dependencies = [
+ "serde_derive",
+]
+
+[[package]]
+name = "serde_derive"
+version = "1.0.123"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9391c295d64fc0abb2c556bad848f33cb8296276b1ad2677d1ae1ace4f258f31"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "serde_json"
+version = "1.0.62"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ea1c6153794552ea7cf7cf63b1231a25de00ec90db326ba6264440fa08e31486"
+dependencies = [
+ "itoa",
+ "ryu",
+ "serde",
+]
+
+[[package]]
+name = "syn"
+version = "1.0.60"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c700597eca8a5a762beb35753ef6b94df201c81cca676604f547495a0d7f0081"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "unicode-xid",
+]
+
+[[package]]
+name = "unicode-xid"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f7fe0bb3479651439c9112f72b6c505038574c9fbb575ed1bf3b797fa39dd564"
diff --git a/example/init_ra/Cargo.toml b/example/init_ra/Cargo.toml
new file mode 100644
index 00000000..fae53884
--- /dev/null
+++ b/example/init_ra/Cargo.toml
@@ -0,0 +1,11 @@
+[package]
+name = "init"
+version = "0.0.1"
+build = "build.rs"
+authors = ["LI Qing <geding.lq@antgroup.com>"]
+edition = "2018"
+
+[dependencies]
+libc = "0.2.84"
+serde = { version = "1.0", features = ["derive"] }
+serde_json = "1.0"
diff --git a/example/init_ra/build.rs b/example/init_ra/build.rs
new file mode 100644
index 00000000..8a70a4cd
--- /dev/null
+++ b/example/init_ra/build.rs
@@ -0,0 +1,5 @@
+fn main() {
+    println!("cargo:rustc-link-search=native=../dep_libs");
+    println!("cargo:rustc-link-lib=dylib=grpc_ratls_client");
+    println!("cargo:rustc-link-lib=dylib=hw_grpc_proto");
+}
\ No newline at end of file
diff --git a/example/init_ra/src/main.rs b/example/init_ra/src/main.rs
new file mode 100644
index 00000000..c02aa149
--- /dev/null
+++ b/example/init_ra/src/main.rs
@@ -0,0 +1,159 @@
+extern crate libc;
+extern crate serde;
+extern crate serde_json;
+
+use libc::syscall;
+use serde::Deserialize;
+
+use std::env;
+use std::error::Error;
+use std::fs;
+use std::fs::File;
+use std::io::{ErrorKind, Read};
+
+use std::ffi::CString;
+use std::os::raw::{c_int, c_char};
+
+#[link(name = "grpc_ratls_client")]
+extern "C" {
+    fn grpc_ratls_get_secret(
+        server_addr: *const c_char, // grpc server address+port, such as "localhost:50051"
+        config_json: *const c_char, // ratls handshake config json file
+        name: *const c_char, // secret name to be requested
+        secret_file: *const c_char // secret file to be saved
+    ) -> c_int;
+}
+
+fn main() -> Result<(), Box<dyn Error>> {
+    // Load the configuration from initfs
+    const IMAGE_CONFIG_FILE: &str = "/etc/image_config.json";
+    let image_config = load_config(IMAGE_CONFIG_FILE)?;
+
+    // Get the MAC of the Occlum.json.protected file
+    let occlum_json_mac = {
+        let mut mac: sgx_aes_gcm_128bit_tag_t = Default::default();
+        parse_str_to_bytes(&image_config.occlum_json_mac, &mut mac)?;
+        mac
+    };
+    let occlum_json_mac_ptr = &occlum_json_mac as *const sgx_aes_gcm_128bit_tag_t;
+
+    // Get the grpc server address from the environment GRPC_SERVER
+    let server_addr = CString::new(
+        env::var("GRPC_SERVER").unwrap_or("localhost:50051".to_string()))
+        .unwrap();
+    let config_json = CString::new("dynamic_config.json").unwrap();
+
+    // Get the key of the FS image if needed
+    let key = match &image_config.image_type[..] {
+        "encrypted" => {
+            // Get the image encryption key through RA
+            let secret = CString::new("image_key").unwrap();
+            let filename = CString::new("/etc/image_key").unwrap();
+
+            let ret = unsafe {
+                grpc_ratls_get_secret(
+                    server_addr.as_ptr(),
+                    config_json.as_ptr(),
+                    secret.as_ptr(),
+                    filename.as_ptr())
+            };
+
+            if ret != 0 {
+                println!("grpc_ratls_get_secret failed, return {}", ret);
+                return Err(Box::new(std::io::Error::last_os_error()));
+            }
+
+            const IMAGE_KEY_FILE: &str = "/etc/image_key";
+            let key_str = load_key(IMAGE_KEY_FILE)?;
+            let mut key: sgx_key_128bit_t = Default::default();
+            parse_str_to_bytes(&key_str, &mut key)?;
+            Some(key)
+        }
+        "integrity-only" => None,
+        _ => unreachable!(),
+    };
+    let key_ptr = key
+        .as_ref()
+        .map(|key| key as *const sgx_key_128bit_t)
+        .unwrap_or(std::ptr::null());
+
+    // Get certificate
+    let secret = CString::new("ssl_config").unwrap();
+    let filename = CString::new("ssl_file").unwrap();
+
+    let ret = unsafe {
+        grpc_ratls_get_secret(
+            server_addr.as_ptr(),
+            config_json.as_ptr(),
+            secret.as_ptr(),
+            filename.as_ptr())
+    };
+
+    if ret != 0 {
+        println!("grpc_ratls_get_secret failed, return {}", ret);
+        return Err(Box::new(std::io::Error::last_os_error()));
+    }
+
+    let ssl_secret = fs::read_to_string(filename.into_string().unwrap())
+        .expect("Something went wrong reading the file");
+
+    // Mount the image
+    const SYS_MOUNT_FS: i64 = 363;
+    let ret = unsafe { syscall(SYS_MOUNT_FS, key_ptr, occlum_json_mac_ptr) };
+    if ret < 0 {
+        return Err(Box::new(std::io::Error::last_os_error()));
+    }
+
+    // Write the secrets to rootfs
+    fs::write("/etc/tf_ssl.cfg", ssl_secret.into_bytes())?;
+
+    Ok(())
+}
+
+#[allow(non_camel_case_types)]
+type sgx_key_128bit_t = [u8; 16];
+#[allow(non_camel_case_types)]
+type sgx_aes_gcm_128bit_tag_t = [u8; 16];
+
+#[derive(Deserialize, Debug)]
+#[serde(deny_unknown_fields)]
+struct ImageConfig {
+    occlum_json_mac: String,
+    image_type: String,
+}
+
+fn load_config(config_path: &str) -> Result<ImageConfig, Box<dyn Error>> {
+    let mut config_file = File::open(config_path)?;
+    let config_json = {
+        let mut config_json = String::new();
+        config_file.read_to_string(&mut config_json)?;
+        config_json
+    };
+    let config: ImageConfig = serde_json::from_str(&config_json)?;
+    Ok(config)
+}
+
+fn load_key(key_path: &str) -> Result<String, Box<dyn Error>> {
+    let mut key_file = File::open(key_path)?;
+    let mut key = String::new();
+    key_file.read_to_string(&mut key)?;
+    Ok(key.trim_end_matches(|c| c == '\r' || c == '\n').to_string())
+}
+
+fn parse_str_to_bytes(arg_str: &str, bytes: &mut [u8]) -> Result<(), Box<dyn Error>> {
+    let bytes_str_vec = {
+        let bytes_str_vec: Vec<&str> = arg_str.split('-').collect();
+        if bytes_str_vec.len() != bytes.len() {
+            return Err(Box::new(std::io::Error::new(
+                ErrorKind::InvalidData,
+                "The length or format of Key/MAC string is invalid",
+            )));
+        }
+        bytes_str_vec
+    };
+
+    for (byte_i, byte_str) in bytes_str_vec.iter().enumerate() {
+        bytes[byte_i] = u8::from_str_radix(byte_str, 16)?;
+    }
+    Ok(())
+}
diff --git a/example/init_ra_client.yaml b/example/init_ra_client.yaml
new file mode 100644
index 00000000..300599d1
--- /dev/null
+++ b/example/init_ra_client.yaml
@@ -0,0 +1,19 @@
+includes:
+  - base.yaml
+targets:
+  - target: /bin/
+    copy:
+      - files:
+        - ${INITRA_DIR}/target/x86_64-unknown-linux-musl/release/init
+  - target: /lib/
+    copy:
+      - files:
+        - ${DEP_LIBS_DIR}/libgrpc_ratls_client.so
+  - target: /
+    copy:
+      - files:
+        - dynamic_config.json
+  - target: /usr/share/grpc/
+    copy:
+      - files:
+        - ${RATLS_DIR}/grpc-src/etc/roots.pem
diff --git a/example/overview.png b/example/overview.png
new file mode 100644
index 00000000..66349dea
Binary files /dev/null and b/example/overview.png differ
diff --git a/example/ra_config_template.json b/example/ra_config_template.json
new file mode 100644
index 00000000..573c86ff
--- /dev/null
+++ b/example/ra_config_template.json
@@ -0,0 +1,16 @@
+{
+    "verify_mr_enclave" : "on",
+    "verify_mr_signer" : "on",
+    "verify_isv_prod_id" : "on",
+    "verify_isv_svn" : "on",
+    "verify_enclave_debuggable" : "on",
+    "sgx_mrs": [
+        {
+            "mr_enclave" : "",
+            "mr_signer" : "",
+            "isv_prod_id" : "0",
+            "isv_svn" : "0",
+            "debuggable" : false
+        }
+    ]
+}
diff --git a/example/ra_server.yaml b/example/ra_server.yaml
new file mode 100644
index 00000000..d7225594
--- /dev/null
+++ b/example/ra_server.yaml
@@ -0,0 +1,16 @@
+includes:
+  - base.yaml
+targets:
+  - target: /bin/
+    copy:
+      - files:
+        - ${RATLS_DIR}/grpc-src/examples/cpp/ratls/build/server
+  - target: /
+    copy:
+      - files:
+        - dynamic_config.json
+        - ../secret_config.json
+  - target: /usr/share/grpc/
+    copy:
+      - files:
+        - ${RATLS_DIR}/grpc-src/etc/roots.pem
diff --git a/example/run.sh b/example/run.sh
new file mode 100755
index 00000000..43d0d975
--- /dev/null
+++ b/example/run.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+set -e
+
+GRPC_SERVER_DOMAIN=${1:-localhost}
+GRPC_SERVER_PORT=${2:-50051}
+GRPC_SERVER="${GRPC_SERVER_DOMAIN}:${GRPC_SERVER_PORT}"
+
+echo "Start GRPC server in the background ..."
+
+pushd occlum_server
+occlum run /bin/server ${GRPC_SERVER} &
+popd
+
+sleep 3
+
+echo "Start Tensorflow-Serving ..."
+
+pushd occlum_tf
+taskset -c 0,1 occlum run /bin/tensorflow_model_server \
+    --model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
+    --port=9000 --ssl_config_file="/etc/tf_ssl.cfg"
+popd
diff --git a/example/run_container.sh b/example/run_container.sh
new file mode 100755
index 00000000..55c01d96
--- /dev/null
+++ b/example/run_container.sh
@@ -0,0 +1,56 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+grpc_domain=localhost
+grpc_port=50051
+pccs_url="https://localhost:8081/sgx/certification/v3/"
+registry="demo"
+
+function usage {
+    cat << EOM
+Run container images init_ra_server and tf_demo in the background.
+usage: $(basename "$0") [OPTION]...
+    -s <grpc server domain> default localhost.
+    -p <grpc server port> default 50051.
+    -u <pccs url> default https://localhost:8081/sgx/certification/v3/.
+    -r <registry> the registry prefix for this demo's container images.
+    -h usage help
+EOM
+    exit 0
+}
+
+function process_args {
+    while getopts ":s:p:u:r:h" option; do
+        case "${option}" in
+            s) grpc_domain=${OPTARG};;
+            p) grpc_port=${OPTARG};;
+            u) pccs_url=${OPTARG};;
+            r) registry=${OPTARG};;
+            h) usage;;
+        esac
+    done
+}
+
+process_args "$@"
+
+echo "Start GRPC server in the background ..."
+
+docker run --network host \
+    --device /dev/sgx/enclave --device /dev/sgx/provision \
+    --env PCCS_URL=${pccs_url} \
+    ${registry}/init_ra_server \
+    occlum run /bin/server ${grpc_domain}:${grpc_port} &
+
+sleep 3
+
+echo "Start Tensorflow-Serving in the background ..."
+
+docker run --network host \
+    --device /dev/sgx/enclave --device /dev/sgx/provision \
+    --env PCCS_URL=${pccs_url} \
+    ${registry}/tf_demo \
+    taskset -c 0,1 occlum run /bin/tensorflow_model_server \
+    --model_name=INCEPTION --model_base_path=/model/INCEPTION/INCEPTION \
+    --port=9000 --ssl_config_file="/etc/tf_ssl.cfg" &
diff --git a/example/tf_serving.yaml b/example/tf_serving.yaml
new file mode 100644
index 00000000..dc9a2646
--- /dev/null
+++ b/example/tf_serving.yaml
@@ -0,0 +1,13 @@
+includes:
+  - base.yaml
+targets:
+  # copy model
+  - target: /model
+    copy:
+      - dirs:
+        - ${TF_DIR}/INCEPTION
+  - target: /bin
+    copy:
+      - files:
+        - ${TF_DIR}/rootfs/usr/bin/tensorflow_model_server