From 8a8cacf349be7f43fa0621be40d9c74b593984ca Mon Sep 17 00:00:00 2001 From: jddocs Date: Mon, 30 Jun 2025 14:36:27 -0400 Subject: [PATCH 1/8] [New] Build an AI Inferencing Solution With TensorRt and PyTorch --- .../index.md | 248 ++++++++++++++++++ 1 file changed, 248 insertions(+) create mode 100644 docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md new file mode 100644 index 00000000000..c233e4fd1c7 --- /dev/null +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -0,0 +1,248 @@ +--- +slug: ai-inferencing-with-tensorrt-and-pytorch +title: "Build an AI Inferencing Solution With TensorRt and PyTorch" +description: "Enhance deep learning capabilities with TensorRT and PyTorch on Akamai Cloud. Optimize inferencing for various AI models using NVIDIA RTX 4000 Ada GPU instances." +authors: ["Akamai"] +contributors: ["Akamai"] +published: 2025-06-27 +keywords: ['ai','inference','inferencing','llm','model','pytorch','tensorrt','gpu','nvidia'] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +external_resources: +- '[Link Title 1](http://www.example.com)' +- '[Link Title 2](http://www.example.net)' +--- + +AI inference workloads are increasingly demanding, requiring low latency, high throughput, and cost-efficiency at scale. Whether working with computer vision or natural language AI models, processing power and efficiency are key; inference workloads must be able to handle real-time predictions while maintaining optimal resource utilization. Choosing the right infrastructure and optimization tools can dramatically impact both performance and operational costs. + +This guide shows how to build and benchmark a complete AI inferencing solution using TensorRT and PyTorch on Akamai Cloud's NVIDIA RTX 4000 Ada GPU instances. NVIDIA RTX 4000 Ada GPU instances are available across global core compute regions, delivering the specialized hardware required for heavy AI workloads. Using the steps in this guide, you can: + +- Deploy an RTX 4000 Ada GPU instance using Akamai Cloud infrastructure +- Run an AI inference workload using PyTorch +- Optimize your model with TensorRT for performance gains +- Measure latency and throughput + +The primary AI model used in this guide is a ResNet50 computer vision (CV) model. However, the techniques used can be applied to other model architectures like object detection ([YOLO](https://en.wikipedia.org/wiki/You_Only_Look_Once); You Only Look Once) models, speech recognition systems (OpenAI's [Whisper](https://openai.com/index/whisper/)), and large language models (LLMs) like [ChatGPT](https://openai.com/index/chatgpt/), [Llama](https://www.llama.com/), or [Claude](https://www.anthropic.com/claude). + +## What are TensorRT and PyTorch? 
+ +### TensorRt + + + +### PyTorch + + + +## Before You Begin + +The following prerequisites are recommended before starting the implementation steps in this tutorial: + +- An Akamai Cloud account with the ability to deploy GPU instances +- The [Linode CLI](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-the-linode-cli) configured with proper permissions +- An understanding of Python virtual environments and package management +- General familiarity of deep learning concepts and models + +{{< note >}} +This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see our [Users and Groups](https://www.linode.com/docs/guides/linux-users-and-groups/) doc. +{{< /note >}} + +## Deploy an NVIDIA RTX 4000 Ada Instance + +Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI. + +### Deploy Using Cloud Manager + + +### Deploy Using the Linode CLI + + + +## Set Up Your Development Environment + +Once it is fully deployed, connect to your GPU instance to update system packages and install system dependencies. + +### Update Packages + +1. Log into your instance via SSH: + + ```command + ssh user@{{< placeholder "IP_ADDRESS" >}} + ``` + +1. Update your system and install build tools and system dependencies: + + ```command + sudo apt update && sudo apt install -y \ + build-essential \ + gcc \ + wget \ + gnupg \ + software-properties-common \ + python3-pip \ + python3-venv + ``` + +1. Download and install NVIDIA CUDA keyring so you get the latest stable drivers and toolkits: + + ```command + wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb + sudo dpkg -i cuda-keyring_1.1-1_all.deb + ``` + +1. Update system packages after the keyring is installed: + + ```command + sudo apt update + ``` + +### Install NVIDIA Drivers and CUDA Toolkit + +1. Install the NVIDIA driver repository along with the latest drivers compatible with the RTX 4000 Ada card: + + ```command + sudo apt install -y cuda + ``` + +1. Reboot your instance to complete installation of the driver: + + ```command + sudo reboot + ``` + +1. After the reboot is complete, log back into your instance: + + ```command + ssh user@{{< placeholder "IP_ADDRESS" >}} + ``` + +1. Use the following command to verify successful driver installation: + + ```command + nvidia-smi + ``` + + You should see basic information about your RTX 4000 Ada instance and its driver version: + + ```output + + ``` + +## Configure Your Python Environment + +Set up and use a Python Virtual Environment (venv) so that you can isolate Python packages and prevent conflicts with system-wide packages and across projects. + +### Create the Virtual Environment + +1. Using the python3-venv package downloaded during setup, set up the Python Virtual Environment: + + ```command + python3 -m venv ~/venv + source ~/venv/bin/activate + ``` + +1. Upgrade pip to the latest version to complete the setup: + + ```command + pip install --upgrade pip + ``` + +### Install PyTorch and TensorRT + +1. While using your virtual environment, install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads. 
+
+    ```command
+    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
+    pip install requests
+    pip install nvidia-pyindex
+    pip install nvidia-tensorrt
+    pip install torch-tensorrt -U
+    ```
+
+## Create a Benchmark Using the ResNet50 Inference Model
+
+Create and run a Python script using a pre-trained ResNet50 computer vision model. Running this script tests to make sure the environment is configured correctly while providing a way to evaluate GPU performance using a real-world example. This example script is a foundation that can be adapted for other inference model architectures.
+
+1. Using a text editor such as nano, create the Python script file. Replace {{< placeholder "inference_test.py" >}} with a script file name of your choosing:
+
+    ```command
+    nano {{< placeholder "inference_test.py" >}}
+    ```
+
+1. Copy and insert the following code content into the script. Note the commented descriptions for what each section of code performs:
+
+    ```file {title="inference_test.py"}
+    # import PyTorch, pre-trained models from torchvision and image utilities
+
+    import torch
+    import torchvision.models as models
+    import torchvision.transforms as transforms
+    from PIL import Image
+    import requests
+    from io import BytesIO
+    import time
+
+    # Download a sample image of a dog
+    # You could replace this with a local file or different URL
+
+    img_url = "https://github.com/pytorch/hub/raw/master/images/dog.jpg"
+    image = Image.open(BytesIO(requests.get(img_url).content))
+
+    # Preprocess
+    # Resize and crop to match ResNet50’s input size
+    # ResNet50 is trained on ImageNet where inputs are 224x224 RGB
+    # Convert to a tensor array so PyTorch can understand it
+    # Use unsqueeze(0) to add a batch dimension, since the model expects a batch of images
+    # Use cuda() to move the data to the GPU
+
+    transform = transforms.Compose([
+        transforms.Resize(256),
+        transforms.CenterCrop(224),
+        transforms.ToTensor(),
+    ])
+    input_tensor = transform(image).unsqueeze(0).cuda()
+
+    # Load a model (ResNet50) pretrained on the ImageNet dataset containing millions of images
+
+    model = models.resnet50(pretrained=True).cuda().eval()
+
+    # Warm-up the GPU
+    # Allows the GPU to optimize the necessary kernels prior to running the benchmark
+
+    for _ in range(5):
+        _ = model(input_tensor)
+
+    # Benchmark Inference Time using an average time across 20 inference runs
+
+    start = time.time()
+    with torch.no_grad():
+        for _ in range(20):
+            _ = model(input_tensor)
+    end = time.time()
+
+    print(f"Average inference time: {(end - start) / 20:.4f} seconds")
+    ```
+
+    When complete, press Ctrl + X to exit nano, Y to save, and Enter to confirm.
+
+1. Run the Python script:
+
+    ```command
+    python inference_test.py
+    ```
+
+    If everything works correctly, you should see output similar to the following. Time results may vary:
+
+    ```output
+    Average inference time: 0.0025 seconds
+    ```
+
+    We recommend timing how long it takes to run the model 20 times, and then divide by 20 to get the average time per inference. This should give you an idea of how quickly your GPU can process input using this model.
+
+## Next Steps
+
+Try switching out ResNet50 for different model architectures available in torchvision.models, such as:
+
+- `efficientnet_b0`: Lightweight and accurate
+- `vit_b_16`: Vision Transformer model for experimenting with newer architectures
+
+This can help you see how model complexity affects speed and accuracy.
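+
+For example, a minimal way to try one of these alternatives is to swap the single model-loading line in {{< placeholder "inference_test.py" >}} and leave the rest of the script unchanged. The snippet below is a sketch that assumes a recent torchvision release with the `weights` argument:
+
+```file {title="inference_test.py"}
+# Swap ResNet50 for EfficientNet-B0; preprocessing, warm-up, and timing stay the same
+model = models.efficientnet_b0(weights="IMAGENET1K_V1").cuda().eval()
+
+# Or try a Vision Transformer (also expects 224x224 input)
+# model = models.vit_b_16(weights="IMAGENET1K_V1").cuda().eval()
+```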
\ No newline at end of file From 503035d08743f08fabb04db2a0d8c7afb03922d5 Mon Sep 17 00:00:00 2001 From: jddocs Date: Mon, 30 Jun 2025 16:43:48 -0400 Subject: [PATCH 2/8] format edits --- .../index.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md index c233e4fd1c7..6cd91afb5ae 100644 --- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -148,17 +148,17 @@ Set up and use a Python Virtual Environment (venv) so that you can isolate Pytho ### Install PyTorch and TensorRT -1. While using your virtual environment, install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads. +While using your virtual environment, install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads. - ```command - pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 - pip install requests - pip install nvidia-pyindex - pip install nvidia-tensorrt - pip install torch-tensorrt -U - ``` +```command +pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 +pip install requests +pip install nvidia-pyindex +pip install nvidia-tensorrt +pip install torch-tensorrt -U +``` -## Create a Benchmark Using the ResNet50 Inference Model +## Test and Benchmark the ResNet50 Inference Model Create and run a Python script using a pre-trained ResNet50 computer vision model. Running this script tests to make sure the environment is configured correctly while providing a way to evaluate GPU performance using a real-world example. This example script is a foundation that can be adapted for other inference model architectures. @@ -236,7 +236,7 @@ Create and run a Python script using a pre-trained ResNet50 computer vision mode Average inference time: 0.0025 seconds ``` - We recommend timing how long it takes to run the model 20 times, and then divide by 20 to get the average time per inference. This should give you an idea of how quickly your GPU can process input using this model. + It is recommended to time how long it takes to run the model 20 times, and then divide by 20 to get the average time per inference. This should give you an idea of how quickly your GPU can process input using this model. 
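+
+    If you also want a throughput figure (one of the goals listed at the start of this guide), it can be derived from the same timing loop. This is a small sketch that assumes the `start`, `end`, and `input_tensor` variables already defined in the script:
+
+    ```file {title="inference_test.py"}
+    # Derive throughput from the same 20-run timing loop
+    # The batch size is 1 in this example (a single image with a batch dimension)
+    runs = 20
+    batch_size = input_tensor.shape[0]
+    elapsed = end - start
+
+    print(f"Throughput: {(runs * batch_size) / elapsed:.1f} images/second")
+    ```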
## Next Steps

From 76f058ec7d3b81fb30b6d3ed6b36f2334bf681df Mon Sep 17 00:00:00 2001
From: jddocs
Date: Tue, 1 Jul 2025 16:44:46 -0400
Subject: [PATCH 3/8] added descriptions for tensorrt and pytorch

---
 .../index.md | 50 ++++++++++++++-----
 1 file changed, 37 insertions(+), 13 deletions(-)

diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
index 6cd91afb5ae..6cb4c724d72 100644
--- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
+++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
@@ -27,11 +27,11 @@ The primary AI model used in this guide is a ResNet50 computer vision (CV) model

 ### TensorRt

-
+[TensorRT](https://developer.nvidia.com/tensorrt) is an API and tool ecosystem by NVIDIA that includes inference compilers, runtimes, and deep learning model optimizations. TensorRT supports models trained in all major frameworks and is used to improve performance on NVIDIA GPUs using techniques like kernel auto-tuning, dynamic tensor memory management, and multi-stream execution. It directly integrates with PyTorch using the TensorRT Framework Integrations API to achieve up to 6x faster inferencing.

 ### PyTorch

-
+[PyTorch](https://pytorch.org/) is an open-source machine learning framework based on the [Torch library](https://docs.pytorch.org/docs/stable/library.html) and developed by Meta AI for training deep learning models. PyTorch is written in Python and integrates with TensorRT through [Torch-TensorRT](https://github.com/pytorch/TensorRT), so developers can optimize PyTorch models without changing existing codebases. PyTorch integrates with [CUDA](https://en.wikipedia.org/wiki/CUDA) (Compute Unified Device Architecture) to take advantage of parallel computing architectures found in NVIDIA GPUs.

 ## Before You Begin

@@ -48,7 +48,7 @@ The following prerequisites are recommended before starting the implementation s

 ## Deploy an NVIDIA RTX 4000 Ada Instance

-Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI.
+Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI. This guide is written for use with the Ubuntu 24.04 LTS distribution.

 ### Deploy Using Cloud Manager

@@ -59,14 +59,14 @@ Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager o

 ## Set Up Your Development Environment

-Once it is fully deployed, connect to your GPU instance to update system packages and install system dependencies.
+Once it is fully deployed, connect to your GPU instance to update system packages and install system dependencies. It is recommended to follow the steps in our [Set up and secure a Linode](https://techdocs.akamai.com/cloud-computing/docs/set-up-and-secure-a-compute-instance) guide to configure a limited user with sudo access and secure your server.

 ### Update Packages

-1. Log into your instance via SSH:
+1. Log into your instance via SSH. Replace {{< placeholder "user" >}} with your sudo username and {{< placeholder "IP_ADDRESS" >}} with your Linode instance's IP address:

    ```command
-    ssh user@{{< placeholder "IP_ADDRESS" >}}
+    ssh {{< placeholder "user" >}}@{{< placeholder "IP_ADDRESS" >}}
    ```

 1. Update your system and install build tools and system dependencies:

@@ -112,7 +112,7 @@ Once it is fully deployed, connect to your GPU instance to update system package

 1.
After the reboot is complete, log back into your instance: ```command - ssh user@{{< placeholder "IP_ADDRESS" >}} + ssh {{< placeholder "user" >}}@{{< placeholder "IP_ADDRESS" >}} ``` 1. Use the following command to verify successful driver installation: @@ -121,10 +121,28 @@ Once it is fully deployed, connect to your GPU instance to update system package nvidia-smi ``` - You should see basic information about your RTX 4000 Ada instance and its driver version: + This displays basic information about your RTX 4000 Ada instance and its driver version. Your driver and software versions may vary based on release date: ```output - + +-----------------------------------------------------------------------------------------+ + | NVIDIA-SMI 575.57.08 Driver Version: 575.57.08 CUDA Version: 12.9 | + |-----------------------------------------+------------------------+----------------------+ + | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | + | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | + | | | MIG M. | + |=========================================+========================+======================| + | 0 NVIDIA RTX 4000 Ada Gene... On | 00000000:00:02.0 Off | Off | + | 30% 35C P8 4W / 130W | 2MiB / 20475MiB | 0% Default | + | | | N/A | + +-----------------------------------------+------------------------+----------------------+ + + +-----------------------------------------------------------------------------------------+ + | Processes: | + | GPU GI CI PID Type Process name GPU Memory | + | ID ID Usage | + |=========================================================================================| + | No running processes found | + +-----------------------------------------------------------------------------------------+ ``` ## Configure Your Python Environment @@ -140,17 +158,23 @@ Set up and use a Python Virtual Environment (venv) so that you can isolate Pytho source ~/venv/bin/activate ``` -1. Upgrade pip to the latest version to complete the setup: + You can confirm you are using your virtual environment when you see `(venv)` at the beginning of your command prompt: - ```command + ```output + (venv) user@hostname + ``` + +1. While in your virtual environment, upgrade pip to the latest version to complete the setup: + + ```command {title="(venv)"} pip install --upgrade pip ``` ### Install PyTorch and TensorRT -While using your virtual environment, install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads. +Remain in your virtual environment to install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads. 
-```command +```command {title="(venv)"} pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 pip install requests pip install nvidia-pyindex From 411b70e09183dd6a5b7c914e8985c0e320500d4b Mon Sep 17 00:00:00 2001 From: jddocs Date: Wed, 2 Jul 2025 12:14:42 -0400 Subject: [PATCH 4/8] copy edit and add architecture diagram v1 --- .../PyTorch-TensorRT-Diagram.png | Bin 0 -> 38483 bytes .../PyTorch-TensorRT-Diagram.svg | 1 + .../index.md | 21 ++++++++++++------ 3 files changed, 15 insertions(+), 7 deletions(-) create mode 100644 docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.png create mode 100644 docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.png b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.png new file mode 100644 index 0000000000000000000000000000000000000000..9eb79957521086a2e599426d68abf6e366983249 GIT binary patch literal 38483 zcmeFYWk6Kj+c!FhNC~1yC;|dXhX@Sa3QBi}^w8a@l(a~9cQe4yASpEr%`kL#cb#E|>6tnHy3g=ubhN2q*9J~QR?Sl~ zBQ)Bgm$XH{go!crL?@HVF_ZjA-v*f^^Y6HPIX#o0c`k=>8W#^i~M<{-^?yKXgTpL@dd~F#B8?Npt0UZonLmh~ zefTL3(oY6C~$7(HwJQRR4PxYY4-buRW zM>Ptwc6EQNRwZU9J3?ET5+SjsX{lvyT#LO?@^N#Tsk7b?z1fXzs>|yi!wM8al69|! zUbUJ$=o`at=(Ur(oAAcr5qFaliAL$@Ee%hTo?oGG?e5SQWyIKKx#%kr1^cp>iJ{3c zUxj?#NAvv3YR2(>zA2-E_ zUyJ4I21$}PszT^WgjDtERxI8pGqr=hLh0cNsV$W6y%*rqt|oR_gUd{Ha^B9 zRSfAW4}hszVTZWQ7K|L396mqp*XzTQsnv%lNlsbXe(7Ynagn>fynRsKmKRNBW6FZnJQiW|OUNjU39Uj$nI zUQDz!d`BR>p+50JL=@$+XT2&M;h3a+k&u(~in{OgxQ#XNqo8IP_D^|1;z%#++QMcz zX-z>9O-==8+3ZrQ>>3@va569N+SMSCbRWUgZo_fN;F=a4J;#v>39|?)Eyh1Wd9A^S zfn+Auto$|Rn-Oe|6Ds8s%HtDC#}leCSti&qJSs9?J=vU@KUV!5F|-{q0lH&LdjXco z$ImE}Rp%B_)jbaguF-LI%l$GVr1o1FoBXFjOT7}mjkVa1+_!@k3;uL_03po#t<)Mf z>5rL8iaQInCrlMgRv&gW>B5f=Z67IwylkLi+Z0>h5+ep-`mY8oY+kjEoduY8U=O>o$`+|Zt=MHxI(cS> zLoO1iL!^nV!x)ADIu9Pp!~es8J)Fst&j@Z`*CpUynW~X*MVw(w#hTCeB4ySxZWW|b z0R@FO{5o0S84w+cF7Y>H)Y8VC2%^xGwY>;hKaPtv6V!?uvN9K&NC_arR2QT%bez;X z_mb=qH?L-jElf>FJN@QYs8@S>iy@t)TKdv`lQdReS)NF zkVV3i_T$csa^YCq;USaCtQ9^@Gb+H^|K2w@i(D^TroM2<7--%Xu?jJoa__Y5KcTF4 z7T-Wx)T~-;4f}hRc0(OoZphB-G2b>rr^_b`B7aMXgon6qd^t8*g902?51Ch!u$X{F zDPTDF>DcOg!B6}pxJv`v3=(StOsP2jRr*cX&iE`PV1h6YGM6n7FK@U|LfnNiKlZRE zOn%}EAhxd~`+r+=nvsPnj1lDyr9YKM(YX6`yrTQeB%3hd83?->k?sZ3x#VwKF^5 z>y^mf4Z%r7y=)uJ92R!3mW59IJi{Kg+i92INGlnG@}{`gm5&;NSJemd?1E!;dP&?V z8@9Y|RCz}&IdfW?1a3-jHufwe?(`dSk9I?!i?e017}FRNE=OxHHO}-Ny%{7>FTP?& zGiNa_z?Nz&@dM0PKj5M|>AygDZDo0jevlX-vr%W|w_9+r0gkdi&WWwXhLe(AA92LX(O7mZ_X1o8fZKEZPbE>I0QuVsz3=$Gy&1%g0MUfDulGc~b z%vpoY$K+1g{@e0rf|=y*^hamQ(;In5#>vd*{M^?4LDMKf+!^kbf>VXo^%6IrlKaMhzegL4vk$QI?~ZBEg&ke#LH+Fq`c`1n+x&}1F`ue82G~*ey@NXfE&AHGf@q_8aU9 zY%l%SUb(Xsq8(`fnvffSK zr&&;oV;2-&+R@1K9@1&YPtrJK6?OW>IC9L2fVvrfzbz0Q>*&joH}|DTK9eEnAfxfa zxAv|hWB_}jJX^Jt$e)<3U$nx)`3Nxf>fQutQ*7!ct-$uYV`zKD!|X4u@J7cXM#sZO zk-?WESR$lOO$`Jd;ER2QCtAcV%OaKLOwEamF$}Ljw!V^sszGU!Z4wu#+fHnQy8SF) zdzjcY7;7Kl{Nrkb;2m$x*O>T-+?k6m6skXXQE6$iDA0I(k*IApEjEiPto^yxerm)= z#x$*UK9C!lmGBujJ78}#>8j^a6$ULrrm7U1|0y> zj3ychT}}x_!RdV{{f?KmEm`*&!pMbI;zrKTZ1doaXh8zL6-Ko67CBz`_esmP9GfXB zh*iBA-2VcFln@yu*OgJG@g6XEg9d^^6S%GVDcYm3*>O4scS}hM%7_X|$RiIfcy22C zf4@oESxc$+{%oofDm@pi@vRb@6TAKw`VKO*799tO(m?kFJa`uN6a3muI?M8$$z^z) zzbad>$3Dxe!Jf$6NfuzBVcmFCGGou>dRLJhTKvZP+C{`o9-b3RTKJ@X1^@#8*N1RE2X#Nh|pJVm6zX*DolIDXIGj!uNZkeazGr#n)rp&K;bs*ybU 
zmc5Axt2amxOKR^50b5SL3l6>L>?#-{5!hs}aNJ*45qZ@vk(6F8&la1V@1QEK6oOhA zXEA*xy;o4;D_)KH`badb2UGkvH&JSN{fC3=&FIF#@%%gM)Xlojg(z2S(B;@~p;)wJ z4^X|vhT+cYuX({uKOCThlxTd(m*#O%pquTi7vh2^8uB9ugyYISsJEm*nh+G~^@^GkHd zxp7Ch=+~GTu-c4~X0fl+7I}~yCkE$e1dKDJp>fkLd}UJ-O^*8BPUP5Gh3s)>E!MZ$ab5&0l`;-zlgLVmh}GH@P#^@3!YPZiVMgZk9&ihZ zm1CHr%c~Vg*T}8+X}br;^tRQFCDUdu(J6;J6HuxL2(@)=SrLr=m+0w+4_g=HgiiAk z+K=$!O*}Xjp%XH}c*R4xl;vWXnbEO1ze;Vw5z)lACZ&@x)`ehNEb-!`&Xp?cKH`0r z=6cm#J{5*)&W>C93eRogNEpH|J_pHrs@*QQ48OyfZM^ey_@40NZ~+{r85mkZQH)c9 zkH>y~*7;hr^us5OX=AukA9lkJ=Fm!~zb}9+&9z(awcRgFs3dxUl$LcK=EEYV9~zXp z>7>AcUn2DG*1{THhc32(eZ%BIQr`N_g$Uf6p8gkl=UhIS6!)omTM%m$c+xVmCtz)n z-I2;a31cgWC}T@}U01hoDu=F;yLMpXVz2B=kU*Md|3NZdy5%8`@;g5T=7xNi`Rqw_ zogMm)6`y$?l<#F;{Kp5tCU}I>?MQw1W=aqg#qDAobz?x9- zh^Bv(f?B%*gWQXjaoNIY0acw=4BIw7$BbfGr-bhIotH~UbYrI`fqz+BcRh@%bKoF( z|E^Lz<}M;dzr1(FHN_b=V!QHP)8lyTvdRz2ns1{ro5=($ZI0($m0OLbN(glL{9YwV z4ozt2J`oUtPkz>dZkCrc{Jy)yn&FRZ`{w)Pd7|6%fPm-qd24y4^;)3Sy67Ljakmr+ zQ~H`O08EB*sR+0EQIff^(itwlO1lA{RvOJ4cC*z=;GZSre!f6^e1+yWeWwxENpc47 z?dFiXzV_6)&SG-^c_Nw$8gV~4Z9h&H17Fx2`k(Ivj$858I&EbPG~nhHov0Lb_Nsrh zJH$=pYq9J5<8S_Am1*vIoyApS46nh&4!nG8eM55dQPkJ-P+O=Ig>r-I;L!OF`Vxe6 zI>kl7@^AgGQp7JC!#|rTLkv#LbG3Kt3DMV!`V1yxhQ@nRe54Il*2lem?@7+cxg05Y zTjb7L?I;-RRg#p=|IRue9|sSh6Bf@BRC~BDA6gC-IdzoGHM#CpV4F@YDnt!fWk`9e z=O7x-{#II|6I$}sx~(lJm3+q+^m~r^I{G1(i4MaNoaaV&{)jl=;IVASQ1(0!ZQS~7 z2z)vrKpY8VqhlGgV5msUBmJUn&)V>}_nJO+K;JuyJ6bkek@@{<)y^^{v!P-C7cI#As2O@8Ly=osPMGPV=R>7AJ<+cI%>VjbZ8hR0;&HoHxMqFS6~Znl2rFOdMr@mRN8++xDFg|)&gA8hkN52X=<_mBxL9;py+D5r#4yFY^kLCwrURr4c$RCr zcJf#z)%59&mIB5CrmfwiN|~TT@(k&of6KYo6vpC|>aDUeJ8a)laFC#LUbwV2DmPSO z1w*w|?)da~QOg`l3Gmby-sa0J%{Qd&xvbe|RHObdzjRI?)oJ*8Wqm2dZ+oj?C2za7 z@EjsI_ovJjy~PLj&eI#lW_dMpBVr@1NBk8%<2-P7{M~*x3Y+)(Rkv@ML%}ON+VBsU z2j*|ozy}c}QxVgi?|O_OWCETC-2&e1pxKJta=!f2Gr|uR@7p?y~I4JZ{ zvo}mcfqVR3r@;Z^3d<+vj2(-JG#uCb&yYL!GIGzRlCS6o8nqwZUM)b}1jY~ZXf)0| zij~iJb@3FcZCDJ1_Pu9!n+=y{e<+N^!zIaGn@Cn%G;aSU@i*^qIG#sOk{PUVT)2F* z^wz_6DBPFXKlc=e>M9s{&6p5&dfN7Vrx0D!`7#zfa(A4}SZ&-7?Z%Nn`Z^ z&!aOOPdYca^7*)RIqocUzK`|7ncC)YZ1Kvt4m!gW-j2b4*=)y@iH#Dj7G^zted*Gd zimkHC$WExZzrz8%Of@F$u#lkR&cDl63!~^>D2H!fU!Tw1lAo9qZF1~=aE6~EJ!;e8 zNjZ-+E!e5{EJudF>r;yCm|bHI%rLRpW+zXZrg1K^_k1^t>hul2rkZ{)9peV8lh=$f zsreI!{GnRWk--3~X?Ojj+bE`HlJ`1FxP_#!W;l^=pp5p5YJK-ROzc!Qh_$N^uIu`V z)pt}AXgGq0$yhODPYgq~(!co4rP9`HA1C>TzsZ>vJ4%N zM48)_LezL>T&)%)ZGye$CpGDQd<|_knZvq{v8;27YJJY8y8QB`ozud&=wpx^@Sjp6 zI^={}G9sm3d@W}Je{~TCCib+hU0u@-z;=BPY>-a&{2^^<+w_vwt1@FIH9>XMz`?;c z&c6bW!~gNC7hK=EHbP8+;cWIbf1-%yuIx-~l`%?oKJrn1&H*Dz2xozISLo{hDmv|cYMRSg;8&fC*f>p1*k zET9Q*D5S~cX43n#^=k-W=iWu_mUZ=ObWG7tkAD0J=;x04ecLxW8NlLPK*5M!`H337(4$xe|E^u$stBl}B$64)zSVu6sMEm4vK+VX zeR4s!__B}C(rI^)ii!jyt#cqi>@oko+XRcd^e(TLSBow7WoX*nc!a@L(>Z>_wbV+x z3_dPR-ahWFN8)Kz8ui4>Ksv?q(wSOMrh_FVTC54H7wj&*FSJ!~O(5{?WVqREyv=b) z8qM9-7_;N+z|rRdcJFIzy!ZJB1DcP=U}BbT7ePcZ=4e&q-1ePtCCmMJbqGJhC)cU% zxTaiE!L(lPZ6L28oAJm}iF?o=o^`>JdM0@JTo)#oi=5wm!#5n?>>?0@uF-aWer{1g z`IRlwqY&|&&c^YK;il)|3oPe23EBAOW()|DAG>U&R{KTL|3O97s+MY8h(CK(`mLM14Oa`>`?58zp31tA6BFyfar?lYS0vk!#)M$V4(Yy{FD=oHKp2GM|X@-SN@XSqgC$ z(D2A<7F2ae2p$RRJeu^3}_V z>IEckjZtV{J}`oKGH)5PH>8))RHETPF>FV9D-fkg9 z`Z42GCo+M&+YnNCt3HtRGtR=M1-xUBQFa0^kO(ZpV z4`IrcZ17EOUp8+VKJ&8)EDWz)%QT}T-1q@P2I~;DikpOIm*Ae&UH5AHGTS5I4u8d6 zN8Pz!7P(`BL&H8Eg%dK-*{|2`R<-2cUnFYxqrt%_MX;Yod`=3Ml-$8BqGo_-->EzL z#WPIzO*Y?!veD6F!BkSO_}z9|I-l7i)0XQ|UQZ+pg53Hy2G*zFhFXRYGlAF5AjEe? 
z^oSJgM!h-2AA`;WA-{>?LhZ@Yj(1ZDoXsTKeY(V~b`#hc_*4^wm`BGb=Z_!41oz8d zQI(QBRzre>@57N$jS~o_?jN{rh(0saq5m-Dl5{RiZK^!06rQflMyqdqe1zfHULEL99M2PefxexU1vsCz1EqN8p7HDID9yyZI-g=Z($3=232z#@6@~cO1D|OnjZo^f+KN!AU z!!8PqKsA}YZTw|KXYW1gI{aOcC=VK8Jz8mkvmoKLj&JG4IOpL*Svh0H9qq7?^2@fZ z%bz0^AiTl8baQ%wQou? zXv^pP@%@eW4D|#iM)3-{7Vp~+AXw-~+^Bu5)!kmhjTt6*-k)#0axIg?Jtsoru^it& z_&GJq^xG$mU1})9+mX`-DP)?X)@OK zFX{^urH#2Q+sfX+=493YyUSmb(PJ+;v;34L<}y0)M&KHP4dvc7?XAdEY@Ke{VTQcAdgRyq0|;<{I?a>pQ{%%Y&kBBIOE274=V*7Z^PIAJ^V1slz~^&+;)psci= zkcl17(Fuau!P-Z&O$&f6}u>zA^e+O)fcgYT*~Ex)2&)0Y1)J#R+oAw#EE0&C$W z{s|f8=gx&Y8dw10vCuy<#o4`GVtgjfB7vXC^sKrXZHW%O2oH^KbMZl&| zgoS-g(wd4FU6TBlyh487oh9*4{@ThDTnC%EQ+6L!6}d;GS`yIqTdIow?GQv3j_ll@ zNx8m>rBLtq3sc=X=;Iy2wzShs5oaxyZhWUI6ZHf2KRG`I%$_L(*4b(*1vLK3us0JV zS)ce6FLs$U>z&M(rm=DH#E0^MSz)QfPK(j+;Y>1st~Ej=PK2WCy)N>Krb#kPtLald zRyB7}YIPH=mf){?QPX>>thXkd_;Dv8CU7wto4#JbD8Nu16>RbrN~kv@bckX)JI;Z1 z$2Z)-9KEXOy_Df|mxjAD#nrM1^*Goy(siT53=m`P88e~r(w)@!gSH_{XN~(qt%y|2 z!X~4 zg4e}Y^QK+plOwiK%ltkuA`&(46ZR1>`AN%G-8WlTxs)#9v*yu@FmkcRF{^XbGKHT+ z0XBSRdTHmDf#J!`$MS2srsQToQ*ZYrryClkVdE7FLEpWH!x6flT;uF*fZ>B2>Z)4( zHy%x=Ll>sesU!hg{x1@4SMA1%qY_lHq>XjrkHZQT4;$kPFZk)3-PM#J<*qYrnMfL8 zvgXcDzE(RTZa6DGgIiQ)c6$5GbXD(@Uz9Vy$9m^QdTa@7At+GxY~I-=1>p~ z%P{Vw>lH!FZNxgV-pH(vNSyFgTAmL&!k0}RA+|%>7$;~rGrXlAlZ?G#qhDtEdPfjiu0>`At0 zuRYNJvGWGeT0HX#MV^`%wE!r*VO8j0PZ&TGQ`BSShpBSY`;4A=7QV*p^6U=%>dDRsD#+ouzAZV3g214_|FE8`9YfWOj)`%uf--AJ zAN-Sw*JbMc)K38#)ue=!1XUsT&m^n~>MOsmU$}{7RKks!KE`0Ngr!z>fNByu-I<@K z|55$tpu!8mh$INypIG{i6Jcp$7o11R=7Cq{#i(%}ZLMe3fH8)Dq=*Uu#ib5PGkr|* zKkkuNc$phT_{8xixzgf%huuK3^P(EAOg#G_vzWP_4LS+w05(ywKOVzNJ@O6(f#F1! zBwpFZ*uE!zKPCNW=vgdS`OBm|6!dfi^gco4KxG{!j7h!`M4K3ret!`9&)e)TVqI;` zjrvSk`~xw~{l&Sn6`F;CAN-s44?PJ}E3Tsk;Z}fitcOVVdC_sPh*deXT`Lt{uN))j z;%&S&b}2T5Z0IYWP*t6Y=W8;`KbwlHJC(SN5y|Q?Uf6C+$2CpmW##H)X^u6~rz4De zZ?It6m!@1Oe>-pPWv8R5(@XZ=Lsg~hI%DxsO|L1Y`*0CK@_0qHelL+MAp53H>;gZ8 zgr!KgnM6VF1mN*-07B?hurDER9uwWnuUJ7!)~hX?%hifPamq0qN{46Sg>yQ&WaTsl z2X&29uRg1{fr5M2>QDilqW4FhXbp&{3-P-g6Vj|&Rru2g z_6PsAzvGLkpkkuQ%T^-B!~ry;-}f5l3lE<+8Rb|!w!Mfq0LH?p!_k^~a9;sDp5EKU z|JHL?`vfLshQ+h4^!j7p%ej-}x5s|I~MspbxGQyk23#d-)jo5<&|y+M`ZS zHLpR41uvFkAz|ST(`-n#?H`h}%IaUVETC&6b zK!~5-yUp#i5$PN#Z{R`YQV7?NDtof$cfU4v1F9N1YtL>5&o~PKYF%^yL`%$RVTn9E zTjdqIN2CX-3(frw{|rxc`F^>fG2m7Lk9miDO!*!6XVo)DJ*DINnY@*)R_q6{ft|hl zM{wfJY#K4^s4`R^dJgjqGP5LEZ=No*rRA+w#J#E0b|nByZS`maYcGe+{=!x4TNgMhK2AC5tzB6C|jlFg4Aagh7NM+C%nWDeYNJu|8VSNvOm3}G8tNi31HGiS*r*bzLDfIUT*L+0M z4W8uR!enAl9)`nMQI)(wdA;}N3e)k{<53C)s}zmyBdi>TLqA}Qe5&B%16+@JjD0qr z?I^u^Sn(rLHldTi|9`|0Ff?`4@XZjXD!hqR^7)Y&`ogxrC2T6*dSRty_icR)sr~nu zxpa=t+P}&5s8i{p#~4?(3GV3R@i~h01_Yr*okd{@ImyN0?lZkkSgF2%5_WFm)bK}l zL}e!-U&V#x4GX|(V@~3$-xCxgE4)dZcppzUNZqWY7MBLq5y<|LmuswY_}cGk z9k@`_Y1Fcufo?Cs-O%{{yj*jd^{$+yVI<=Xyk#h?Y(InAOl2+6GkmqFh3yE+wSaLn zNng^#78s{FE%QedFP$J0*4qM}5wdI+le#o%0G}E!F2129ys1%C?S>L)TJ-!q+@I5I z`nkrl>Va)t<@!zvpv39m_sYBeTL|=>?gDkHd7cdeg%ATu%6rH(efCFHji31{O7Od(~jhL{WYJWXkhY?_ht|efvQz3 zt9m9-XjP5r)>>4$OGT6TkGfU%7LlW(T>LHu%}VmDOCRl{k(uS$gir{%H;KYm6Du|# z-33rWwUsSEw2b*O8Bv+&odcI@GxW?Q(?B4Z>W$v+PsNn>=TAH)H#{IWx9*uKZ6$$# zof82>%I6Ke$sP~n&Y!dSIM`-&3Ew?hisKt%1yl9!D{{5@?7dFW63UAtsoH%-uoL8j z1tEJ><;+li=-g3Y)|pGPpQpU97d1$z@YmfyGQh(>!q&Z8i&lmbeUwmUMSXe8BM8B3?BwU_P7?lM zeqBjj^xWzgBepyxu_%0hEgz!dtG`B&Ngd#k{g?MC5-RQAJ`y?prgDH1c^?E55e$dpoakwl|1+ zvTp{$#zh)2*-{tP(+4|t=i~&M0?GpZ>v8PTmyjvr^s+;O(tn)AUH}Zxn$E<9^a?Jh z4}Rd(3Ihl5bm~VtD(O)VU*8XgPMR53M z&cXV>hOx~m10eS9y#Dmx0Xs9WCX^4zRXua)cTq)|hVJaVEPRYyuISm5d%1it)+KGU zZ^OZsm;nlx^tuZ?5Qt*_3k=vS&J4+ai}A5F(6sq?#_7pyl*ZQ&E4&xg6=wP8-&y|y 
z!aw4OIcw1Voo^sa`gLnj#)HW2qA&_{`}HCZG@l6DrRQD$%kcjN!;Q^(k3v21MSs66 zy4Rx37BK=1AEe_XbvN9#RZvLX@-4V+WK;&SK6ZGfE11>vPPn!flIDqsbvTefip#T= zG(Z)07Ub*FHiZo7W)?B_8JAGUb@?i}B)Y~X8mRe{L+ViB((w&V8P~*t0+u~A=qRIa5|g7Q478-{{D?GU+-nT?yb&$2=L)=DP_PZvs6k;PDait0_x zx4E7IR?TN zKb7*l)yh^IL;uXzkoEXI;3MzUwr%9)2;V_v3{d#%NF3%>cf zG8^kV`5U3T+y1eNQ79^+elX+xpeMhMQ_0lUw36mu8K>>_1umd;vlgA!R{S$r2a7v+ zq`PqQbYI=;^louWqmzyKqn%dJg&B*maBNfXo!7_9B$T6dxgIZ%?VIPW{=4LDyS^xH ze_|F@5KoVlo7)^qs}=K0k*0-^_7B*U#>!=Sl&7U)0`^C4jeNE&|Ff#~!7h3(^!4p) z{oZgT+ONFUn|qN?^Ia&H;MwP1`aq+YL7kpo6E@^Ltm%2@XYTDr-o7#;E90Ok#i=mo zhJFVf2DJ6|ys~B{P2A+HBRq}C(>(R?-bW6>^dZXU_HZ5349~M9?24Wu7XlqX3tg8*+wmASI%Y0iFHF*6uu~4QOW@y=6f_w(Ob3}WZn9z+@ zwC#<(Y3ag9Z@R@SY_&+zu_Yzo`wdOqERrW@DYeskXJZsp`fgHc&3b#U8k&lU;I8MC zMg0fY$Yg4!iz$o8yMsEOsSRCXsH^4GhX{W8{e9xXnv;1~)1=?ikCrT1PI7rd#>q>s z+`L<`CIU>DW<(>j8#nSP@<3AO#%{r>_^MpbgkKTW#n8jTbIk0$66(U~pT^hc4u8id zWSH1zTIf9}i)_Po7L&a0_$DnYJ~wJM-sk$xAqOQ9mVn9RQ9d9iZiJHP0cZJ5*nHe> z4aIJfS+V@CZI?Qf&d5V0#LIvUTQT~B9nOYLn%r5}`;SLG9&^>>puvVcFT#b9h{z9` zEv=SGKP7#IFm;$t)NV_wIfnaJ1*a4#W@YWETJZJ&?F7bz1QmF9_;{=q>f5(%grPac zk)cV{a9<>g<7DH;zTFe=Ymm+SsAfni_Mt9oPQ%I38e zLYgD1VmtNMMEJWocqtOW9%*S@+wt|K76rH5J#Sr?lQuo1o@f2U5x9B& zw`mcMS~No7macoViiuonoNmw>AvBCjHbO&5j6?&e+tUyzyWDwrOwaK&EGvV-3Xw-Z%ZgMunX5B1Ijjgm64w(*Aa zl#0__!hE1RDGb-gTM9Mp(H!IqdAf?SZ6saa5z`?Pk7X zM10eC5wQJlr=Mp+V2B9)=(6|#{O@@4%)&E`;bTP5HlFa6Xy9`d=|X#Cz+lRVLgCT; zHV0OG-rLR8$@IlP+^gNV)a2Q;_hGN-ig$y-N-CkMZu&F3k`xg~e`FYMtZ=udCiTx_ znPt*-E+Kimz5pHvFL97Yya>;gQm=|S;f#(al|f0o%oZM5+<*-R5GiKtOfM6FfQRQr zxAM4of}y+IPG1XKGfUFA0Ffe0{6nN=$2sI13=LlxSB%{-k(Mks_mx*l@V00S4^AFl zaFn~^+&rv$smN%*ZJ};5n5w$DpN*y9lS>4>t>>}tNmU`u1N(1((>C>ykr}Gs;k=;P z+YGAq7ku{SB@6dN+?vQ*WmU{CQlrl)Z|5kP3e-y6CZytLt(4{&j!#(3vHWKPgROBH z6s=9#vziVt6it1M&s60|GT02qk5ex!q;gc#H{Z=mTa(K zRnL1nqF-;skC-|nr@095aTpWbduPJW@lLbv+h2 zU|Ye1>tv^IlJjvesUPE*;rqCIK1de^xx|ZSZy{ z-r=U!k!C>-Hb3xv=pYb8!y9rXC*f3zY2`bq-2D7p;Q8u}&b9_6UP`4#d*LYuMzS`?tvM+`0kJy>wo@Dm_L<|7?m_@*We8{gjOx2Ej3e45e;Cv+f zG^OeOqtZMq4$;p-z0Ol~)yi|7L;0_db1JdNVxVn*de=NDCr{2LP}xsLieH};L;ITg zmb2(cOsZnh|0J=X(`&`%z9%T6t6X)I45O0dovB&>?M$jij{}3WOpCt;wl0#U$b-od zx9oj`rnfkR>;++8Ov+z|+;nVJSE11fLGFww8ak3x)qu@KXVwhrp2N9EPc??Oc^9r@ z(LnUP&vXsV(PJT=JT-DcCUfAb9XxACj91S^jpl;-bdL!FROO4}b`NsIdN)-YaeSfZ z>$Jk%{3_oGdJB-D`gbo2*@$hBat&f9!khN-ne~%?doDDPX%|`Gh;RTu5pS<2T=pE1 zV(*iS`cZOL_Zgl(X9grZY~`n#qtFz_XOEg#owu5YaeO zI*#6nAn$WYuEe%Sizkn&92iThqT?AAd}2$TzVPF1(cgX)0!MZ$UCOGHQsZ73kUm3m zHZ^>=rJ><;F6uVwUnO-)i`5%`2#`)GRnkLbRFZw!93}?R^dOxgMc<3?HF%h>bn~0; z(}gwLzRC7LDc!!!FDRE$srKsVQby~cj?l8AVSscKmwP%pV3b6jA)mnB=yq2+oZt7& zQL3%X_d~6p46x$qc`OGPDsd6Pz}<9CvU{~0C$T7>dzF8{+DaviK?gb2RSj2OO}?iS zDw|UTKB8c)+qMb|_UA{#hE*q?{C-i#zSm74UGp-Rp@}3S(X4ZzQo;5(&TsKB`vReE zkas8yK^`FG(XhoXE^pDV0Njky7+F7x(lxACyq26=uS7?K8g@NH49i&^qsi($5vyavn#;7P~1 zKi(K92W}`DNeP3^c{^2`T-quU11fYvYlP7QE6Zj+u>mGZj73h#cBMxe^LxWt*|yKO zokR)KQ&^%D(fjH9)thBC)@fTDw883w`b%S`sX4oa`liRX7{JlWVOH43m?!I`K}3_N zGark-Dn~;6uO-6`6iNFucymW1^V(4-vLC;V7-Pt&(2BWdB0V|V3OX-GADA|#vCJ-e$>G)a~Ga}rQ`LRh+3=tc|HIwpW=^Go_fo$K~ z{W4~exi8Yy+^qRVZnL*?3t6)W`ur_L`!LfT7j~VS_lw^#=cI@US!hj6(J84S3%C2q zqwq`G(?g?{F9xRS0lDRUT|Hx76%%zz3uS>w*W$o#;`>N&gIjfK70&SLK7E)u@P3tE zfcj0aRQgM7&D%&ZvX;9ei;YTV0w=wInV@yry|qpDJx^BtD5gkihz>G=%iL zyn$@bBs2SauwT1<`OO7d=t3fU`{itN$dxgvI|VwZ-k!(Kk|MJO#y0I!Dy}KQhgeU# zi`6_2>6N57}1j;}$YmApi?A zI(*VUNIb29JL7M>M<$}iw^N@~6kos5|ELL<{58<1G*DN;R68z{Q7?jI5v(kRmHopH zb)Wsz7oYU5etn26J0ym2W*M*g1%ZJnOP4596fzXuUfEuexZ_lJ(=HQ5A{s`XJ-Sao zro@;h=}q36uNvh4HoQOv>C?C?F7MQuERRi24X8l24;pAQbJv`k%Vv^tBi3WnadHi< z5+F@*doyd(+dNFVg>)A*4eGzny#>b0JS-?Pj`&Q}Yjg~89B2T8hS%eT=sbOC> 
z1ib1LKkpq)SpI=>v%CADko6c*al~|2hhL0NZ><>j=(ZG%4GKPmUCHX`v1KVz<&0bv zur^HQ>%*n}6nvzF+ZnC(1 z%VRjSJ0vqK>|0_4k=A7q;_ZDV?zKq!((&RGe`OK|JcYz-QFj4Ze?SFPV#D%t+lfei z9`5tDz*y<+#!!I0jkhGIOd95T`v7BY{i-zBrF@SrMoA#P1IjcmKvZ zA>76qKq??`r1M*gd0s`77};If#8@9fYC6}T*y~(EZ;~7vRu%Q~XO8rcsSmp%IyP)n znRjkcRIh%x1)*@ZO5-GG!K}PdS3}b)$R6$)K*=v(0KYtY#GaH9!Lz}xU5dmzE3{_+ zSzVJsxGIS#?kY_6BG_!3{0XR1AMrKV*@CKA&dVrHk^u)6t$TG{6Pz+2;r}t`h}{Wk zoyS>SzXrpt749A4@VkMe(-b3nKFO83vTjLX6%mtj2Uye&Dk(uARIS@7tHbcZ=^I?@ zRq?3ltoz{)((MIK4A`*t1o;`B3{z}4nd1*M9w~?xfmR}zO zk~1g6#5aC@3UptqXdHc3(K}3TxB-I3e(@H{zKw7z7_#S-w@tlkNuTU6qt08;_Lr&{ z>Q|}w+5d>$QN&^n#7S7nyM2vW&nSeBs~N2*_tCl2tFYXm&lYYTEf?6$bCNqJlWpkn zfi8wW&O%=`@O(mNqdK`c+hopUP510smO1|Kf*9O%cDR+U5en z;S^rA)aDo#Q#fm)=wM`|)1bU53CjF%Q)_p{;S75%$>hM5nJwwxmU(PKsVqi@s68?v zW_@Y4v79`6ZCcTgU1$4d@x5hX3+P7A6q8hTQY?AT{tH*4$iOdLpQ@tanVzSlk7@H{ zLTAOyId^##54hfD%xG3#sKvXa9aNTdZ{*V!*$9lah6{pYcfU^P@|)yF;Y^Xo1{ke# zb)CkBFS-MU};u91cRluLgA6+wi^an0THBq8&SlswTQ3(J$W|G^)g(_{mf zbme8RV21<~k(%EOiNG!7d@YN_&migj15OR?`DzzDEPnrQlw^Q++YZtR8H1dPq`e}= zOUGZnOd57T63#DS)~r=%=sd=Ndj{^wLz|%~i6n{r8h_6z;o>}yD#fHA$`skGQOej` zRz`M4sQ-)G1kX#@7Fj-dAPfHoV~I-2=EEuU&;$9G)ZrD7rZA$k}|&;B>fyFUAW6B9~;`0?+It6kK{59VuU9_P?{HD66Ksb`EE?rHOGIfnEaAtsULwzzw#zviHX zK%G3qV)(x-G=vXZ*Q8>zrnArkn?)CWo^Aas4Y>w?zBwz5|87B@y6x@YF*n}0R(wHD zY69C6ykEf~fj}2N=~0ZnW{ZI^6?Cgx18 z$2TRj5G(AX5)DRE0g@&5T81cF){yX!^=4&2^pB7p zaUWo?(SH^A<@-Xc)0>fA+>S9nC{ahres*OsRugCY(~!M5u`EQfmIK+U0)qkhqp6Dv55@X|AP&W!x7 zDw-s}F#KW_*+w(&w8Q5(QjRPenn=obJwkjMkT4?~s6Z0CcKxaR{}+328C6x=whMy@ zCm7 z@uTz+b1Y{IB0iY5FEC%Dmi|V1YJ8kL$54G0B>KnC9}~BJM-#WBx%V4$+F8!G&hNn? zY=|2`@sptWMZ~HSZ5$F4cBbS0M!?2Ur2|cuzz@%U<8O-?oVuZ?tBAbUEFsGWPQU#e zFMIjt7~1=OBY0JDH!iroqKn(P@ceX%zsTI6NZdW{$5heT#h+)T^`^qBNSdDhHb&zN zh%{ni%cnmE2=reX|FZCZEf7#6efo#?)kpviCGtO{a3w3iGNS+fP@r@LHt=skxi?Gp z@i3d`AFKX9LrxpFF_8Q@v{zVF6~8a`|M*-!BNQ|IwiPk&LS57Fep`+I8HzmSVdV#! zKi0I3|Nm?gfP=vF*@R9?>mjh;73AhTE^NPTQBWjA?#BQgIwPvB|8G0301H?hO2E^^ zoPYG&epgJAdJEFXftSJ`_I*$K{Rk!oWpcmrYQOq_`ahCJ!@+jh6)qzEYyW${Z`ZR8 z1roy6^asDKFy0SWT+}YxKTPXo1>d@0&>^wpAIEZ%1~X(q9__bj^>&Ecy}$NX{QDKN z%V9i#ut32ws6iPJ7JTcai&0Re=2@UCc4W%il9Xwr+>_z-S}C>A^-IL-!?3t@E&t+` zaLm|L^oe?Q*$Iv8u7f-6acz;IZYf7|ox(mr6chdKscD}DlPrC{ItGVMNkW``B}X6N zmyY7@!~?bSCTV`0a$%gU#=E%uXg~EAHX7X?FSABJr_QCL7#e?OlE~OZ%{+oQYtLhHTU+TT z*J(+s-ZxPXF1E^Dt`srbTU>h}vtLSFdNU3!9EWim{bmc6N-C9Z~F`Oux0 zt99-#R2D~rq(a-yF%BFDI>YB@BNTokk^ynd(rK7$4L4l8w!Qu?`^h;@C$T_}hznYy z*Zx=Lcc-hFoo5Mu$CL(nbq{CqBbeBWE_J$X(uk|SbLm%h_^;mtyu33aKz(x_ALSa0 ztQNFY=;62XuNnqW8GQyc1AqwU-vs#Gi2;cHCux6k-*=IquGvuB$A7zB-QwNInj$TO zUAr=&%8y?(^uPXoOO43Odn;cUHe#H8!;DT)EM&Ff;%ZF|K*go!!W$I=ho7!l;NS9t zHEX7l1DJa!>cdtpg7~1?4PiPI-qK^{bMIQ*0c0SNsIo9^@{J`pmxyk!6?EabB)d_J zj1``*9uR}G!7<1IPZ{Hr8!F|(n-;y`EnWU&vhXMbfg^Az!=5q&z)42Om$Mri> z&)z=2JXO+4+_XCy7Jr)i7LdFRwQaPJ$??Qvn6LE-ktMX~OR<9bDC|eKeXS?~7B-5r zZL<lSm`h4m_O$ZZ!6T#Ug@_xtpDET5{QR}w3 zmpJRzQKfl{Dfl<}>!@?L(TrRP#Q~ASbSCnE5Mh_!UCi%gw(`=q{87cqv3g8Dg=tew z$L)-9SDq*(`8Hk3VT+-AYnaHjFVq(+9n5$|?3sVns-L3#R9;RhTyP$zH}Ru`V!||c8O`sUvKt7|L+~++2cB}JUv~M?Ay`#plJy4wR-`m<9JBT4<9Nxe z*RWg@=N*y1l38=dkb4Jw9UMC|L!uV z$Gv?C_&)&ngOdB*GG=)?)d4GOZDbp)>@UqsT+%+z<<lLbO`iyHix_$iKwy_GmA;(TD<@R~@*hkEs? 
zj0sKeQMFjD*4=>dp5oZr@wusr$#vLMx@*bo4Jt;YS_|B@Q zG(t<=)8w?G*{Z$>APB(qiEJ7^IShTvw0=T0p9cZz>;0)VgRs(irssz*zW9?b)q4|R zE0bRsy$ld!6pV+$HC#Ix&J`()2s-ma5-TIDT@Q0ItsaW0-!0kEfR8JzK%<~#Ugk5P z_A$2gTU|$T=sbeu8mc4Xr{$Nu?vK9G*vyTV6OxUu4J{@GR1ag}>W)5)^{;($hOK8= zoywaGCk<*|Y-OCnUJ6|4T3~J{VL(r0^_r3x?SMN3fBe*izNy}h2avxgn&`E`IE~t*e=v&nMAVs{t=J}vZ zGc(Y$woQ`+WF!4poi6x34;Gkp18Nm<#sen>4?eL}V zpnS)?^8gB|-mm0J?hpr0m`RlgIfU?+V~mIl8Vz`m?*hj8SyPx@6_ z?HM7#_-^*ePhJRdlNdR-U9=GOT{DnG?@uv)MX*-h;D98B&+C0Rn0E^Uaq@p@A`hRl znng11roy^cmA^REfmYG6Y zg559RHx%DRf{9KDdmpU0t;8;Y;Ajxf+9(<^R6@a+2ba;{*#9o)sG;! z`l0l8v?uoXSn;TvnQD8?ySuj|;tNS@xlDWfvw?HVWRETL=l~yEW?FK-kfu^aF4<2W zIyIVQ@Lg0lA`26dmy>qlL^1ww_6f)0eY0%IWBb$#89EUoMUxBOb^4Fa^Oy67sS6DH z6Fj3NX^mmpaX=19Of0jDP3-GI zkd1To;pFF5V9C8H-?}$rNtX6-cdE`U-a4A}>?=CwsQE*Q-Go%Xc4qz|7>SeBmB%ut z6D?=}F>#o)5%LL}8u>dlU5U%lzE|HAj#wr9@++r(3;k6>7AInZbPj7uuVGh1We%k! zd0L^>j&?H!7COY}L=bDl-IL7`^~yC96LDJp(CMihvYFM536xWBY3`d#sZtrB_x-o& z_y{iI_v&&n3DjOI5^%(?D|?yLZU`l|r^HX;8b5QQ<}w(OsjVd7$m$Rej1r08V(CoS z0dC<*#V6x@s>?*fy4LMl!k)rP(J8rtteUZBT#2;=S|Mx9e*40>mZXXBbmC^`GsUxA z(>)WekV-5>vUg}cMhaFyj+8=o17E0=0l0NY^{W!!ltK4!-7i${JG?2%0GA$IQ&|SAC zYtPg;1jAra_9%~>tE*jBl8EVI;B5$tbt>&g^aZ zo}`W4I64w)Ej7!2r@E`>?teN9tQSOVi0w6^J8hlz)^iS`4Y8a>XLcf&g+-M>p*`MbrbHp($ zpmN*eLinoVcYjb>jZ~$Y>%FpbG%|dWtm}nu(VfKbVjU4&P2?`cnO=L1Vi_k$nhK~Zy0r27%?&=l&_I=Ue7_>Qe$C#sm*N^V5 z#(r?0$^?Oa>CoFyKDX_zN8;Jz_;|!3pzntJ=Vb5Uu=&IQ?&+?F&PVvY0ZJM6eY3@1 zvy$#dPz5jol7E{+zo8j|*{Dkq^8602)xEpw|Cvi_yz3(jgixaJ>TkbKj4eDJF-;+u zL8US0?G|-yyZt|tI1pr^6h=A-`+fE9D$MymOb@tf_#59J?*U3qm+n8_0StNnA1nCQ z@#y_an}nnC=bf;q8GWA7{8zu+?*D}az!25@D-$feWP>dA3Pl`j#^L6tNUvZf28*)U zc}nU$OX=UOx%m45OX!9`5NHJG!}pa_gv{O(+K?-U>k}61Y81Q$Xv~lvA)Lhe5$1s& zO4q}fj{3O$8@si6?S|dd8cudgHLLIS4hm&&6VgWt>)oF;{7|g642t%$=VSO|c|eqC zkhQIg4%x#x<~2UG=fV?_s-<+1)bq4Wi%AXZ{d0%zZ9)r#*R5+_%#1KEixg83D%b=| z^Vm;Uj`7Uh)$pM4%Er@mZ|@Df%F!8}IhIDzA;iB?$%XKbZ6TKOF&U{ex>n0xXbncV zkq`N(uw(RexhyTDnA%Xn7IUla$JJ9*a1KZyakPX(2YOrtGTUHt?)EdX0msf}JR9_U zlt=bQ*a@W&UJyFfj!5w%NXu3<7ZDMWlnj#X2Gbc8>n14SFQwMp3U@g7-RM@l;7Y2^ zp}*D>#GMj)HOC`I)JJ&k>i_AFWvIMCH3P^Kb2of-A62>-K|#SxiSQIYXB!U>{@NLl zkk@)SgBV~z7jFB%G8fQI%~YD8%2GG zMEAjm8nnjcSXy#oMQNdW(pdMbZRKx2?Ra@8A>Pr1o@G6< z78Z2b8yuG7ANVXJ^6?~A$>xYITU>{$bwLMa4UACV$xk4R*mN-p1qFo&P3z9~`3I11 zzp{<+whryN-cUtOfdCg4NgL%cg7y58uhG`J&vdxd?5r9&Sz_6~xh*d*$1>{)dR{u% z+S+z8j8C8-Zz7}PhY)iOR*qj^Uq?noZMD!Pv-84Xv45TQv)Le36{qVgqcc~=Y17is zdzZDA_q8Z?!~WqVjA(J)Bssv?}oFDXd~Wo>xlF4@C*(Pwg+IP zYFLJah2g%eb~9ECJPS)APm-ANH^(HCah`)FJjWbkn?653$D>#6uM7*sp}t)qhr3)K zb|-)R8WbE{zyD0`aoR_upZ@i;0Lw(#UcHs>*jD|u?p7dcu13yEXVjIYh-ckO5hXvr z2PEyaPJ;uF=VjFe#ro!^S#P7z?fFV9qt>j<`Q@dFiAjYaE}EFOHp$sDQ0OOLU$Ny# ze)ImNyw{j11O1uSz8r2a;IK>KFKwj>)s!cSxdkU8-DOAJQjjF+&WmpV{4tew8LLNUD)U6Sq&7@arGq>6k75Z^< zG+Qcey4vz=76kUZ+{$egc;y70WHId0uKVzjZyMXUpvm$bI=+j5Ihh4<9piyp8?~dK zqGhqRlmjoRC+@>Xj~M9a3TAi{rWecc|MB=h$*|7CJ}MzY&t)ME_79>jrg17}+MQUs zZN7B+tN&Jkn4pK$l+>cytuL05&2*4~hGy~f=_BX8S&g#y-#Wy|<~Uw6j`%v7w6EWJ6|ZJ**PjN}jB$cY6xQ?m&v zT%H77G;AyuObCD=gw`Tey*KgO*RKaKgmR=iNcA~5P&etu*o5h;BZ)QK30#E?Qw*#p z@m*^}2Klnbw03ZxhNA!R*cuGTWd{FRpw3wFY@V^37mYa6SeL|B#Yi2ghuI>+z^lu9 zTXUYZE$rcCoVHh#El=nxiBDdewfyN9+YyQhVram7g{%T`1H?*mJX{Q7TNsVAb7hl(cXaON%}3KJX=`hb5~(!M zxL7Z0t@n({CXSrI>Hq57ahL{tlxz~GwWJCSZ|gxD`alwwUaOa8z&)pq|G9c*i`7WZt3G!fc*8deD2TumGEACyrcP9FMh}d!~9>;*RpGNu@Ud;Gj~2JR-CH7 zp9)EjiF8{yg{e>T)z)Vc%L6y%w!rqnh{=;oRU0k%Y`rK{HbALvhQPyqd9=FYwXMAH6^bz`r?vm;Fc}tI%Ud(RbV=ZlF-h$Ar+cfyO8X zZi&ZJy``9{3S?3n+$<(@E z%;3pTZS(5HzQfj8ohY&7+`TH<(lF+xCddvgT0RiTIgx8y)jSxhVQwujo0?T8k)hOi z+CEbLqy*|kw;IDCA>1F!!;(l*QXYy@NKHzr49D(6Z01|d2H^}n#UdK~*3xu`c9>h- 
z;7XoQy~UF^{@lu7vwG%ldq5@et3Y5s4b)iMrAj&1$!-Tz3aaYx^w7ATud9w#5e@7< zJB$~MA%{_}Z3)aqU}W1M0-4O*AWw(CGBEqXYrV$kgR2u*D66A>BF(13bx5Xy=mheO zyO#O{#%s97;>jvMxEt`iEx^@EHw_#xUlqjC5z?(a2j4D8EC-^k{Nr3k%hJ|0?9=Bj zkNoDN2a`(B%o=&JRxZDJkBCpV*VpOMVxXE@HGAxbZ#F5>CBuX}Y6B=xA(_;1=fjmp zQoyys^HxMXUCx)TxsYNO-N?uPj{QK%2V~?2A9#~JuIz^(STdDGR#Uh-!=!s}b+^z# zYg%#2Xb>dcI7Sk%nHA2_B!cJSH$=~3mPeV8FQ0~M%lckSiq|HK@)7jzBbzob9Jm== zg_5Nw@KJ!m7k*5GUS;e9hs5SkX&+bn!NBxWHJ2VP#}oV3xa1P}h>$g7Xno9z8;}0D zNNMg@QC>gey)1nNSL(1gSjqci%O0~?jec>TSZlfn8ps2QJOphj!iJ!&Hv{&q1mR-v zv>|$?4!LxF*!^q8i5~V&T!>7#$4Dg_STJq8XNw1)L_4e&sJ3E$}Ue7+F z`~S6CTS|wCxf-K)C6wf(JuBp|43Chk6Lsy(Q%QU#HTt=3Sayaz#fjp)`J zP0MD|U#HHOl9ED8OKUGvu2&GXwTw1`(A)+eOPabm&MLn(--(p+7=H7b_|r79sg0VM zRC|S!B6nOV=u3iLY}NBt?D+Bo zY|w|Uns2ZHEzw5#lySCcNPqGKHbEDGZce}q-h8mYV_jpP9Pu-i9iw;e-l>)9uC>Hl zeC4(?I9U8X*XS$+S`Sg~-Ee+q8UA1EuLJq%nb(o#zVz9Xc6w%Q9vI`TDV0o&3P&^F zB(Vgxm8whmoV9)5%F622SZTm-Zvw^!v))COXxEE8oaAy;aYS^}S>%gOi3)0-UG~a+ zp+h4#>3x;7*l&$o(HI?A>*p9_Daq6&CxkE1^dDjKz`sGhJb1NQugF<6FNlf5Zv*pUsGuYBPcGBUDxb!<$`%ZRtQiP-Sb>M`Wa zlh?a|ueuoKOV{9QuQ_8iZHeoyPv#R3nbcG^^x1d)+z$ zX=J@laa3UEa!AOdz2N+=WIpFuCLLb=w$ImNC)%?|`dXFFVsU5iA6m|Q0| zV~j(fjLLu51ax0uZ7OrxGB|%F9obxcQZF&p*^Q++0xb3(Zk>+I*V-SAYe1&n^HQ+q zrB`65ZcDyp?8WV_MTV%ue17LVxCq^YFS}#k2r37Z+PT4>0LQK*Rl3k?r?Uj#PB{Qa zN_wz%=x=2_p2k{R?>oCqLxf&pSO8h3>AN z?z@<9t9-e_qkv;C8)ui&f#;;8boTsn5#owYT+Ke{0&ikhsSF`@dfzfX|Bg1TqhLS^ zO=C|>9UkJ1rRHKZpwus;w41F^(4lUgM!xf`Z$Hl&*&4Orkn6*amp%*yVoyAZ8J|@l zk}k7K7HjB%2$k z;a|Rd0D*!71BWbJ&%eskNKNA0Cb*hZ7uIZyZzwt z*V(O1SDCXJ_eMuXV%wY{ML(dmkg|Lc)uT+Ol0bK+*d~44Z=tpe?zo^Uev_{Z82Y<& zaohadTvG=J_GZJ`v+{YD>xFo6o!s|#SKB;$9O`BeyKje$v2T-TfLD@ql8s?EtQWLp z5nh>t#Yl9no3H}N@^m{pIg?w1wAdvTSlveeX)$_HTdy z7=&ud6gAOYH+!e#1bEDt3aBv=Dd|*6CY1p*+A8><(M8%#?_UqN4fa?(O6V@w;Ce)G-vy< z0u{imQ$&2Z5*#PK$Nu4GGW{kqCEk(tp|N*JQM5sTSKbl7EySWu^dfu0e18`E=G-nj zfNVs9*zg3Z+lL@v(jULL_})fBrOItB`4c+y=_a(8=d$J2QcDU%Vj+BBh79YRslL4aWsWo1P^Inq9n0^CtfBqZfE%m%C5Sx%vw^ zUJ@RNONz;rxkP3_FT2>R>8#6tsERE_UF~b{l4%C|lvz2Pb;~@jzz?1*ZaNlCQxW?m zK*Xu1EcQ7=kQwFGaF~B5!N>QYd9>D1WJo2s<+ZG&87cqxy+2AWet-XI$b3Cx0!K!b zt$P&rjBdGeaZBEGl=3JJCB2d^uja**1-$SZX`JN&QVZnq2aq>eZ-t)n6&x8iTThYj zCnX1w7DA%!Q_V85H42<$B8KH`0@nU?(m<7)r^31+mhnAi5QUKpo;^J=M>hc*-|D5j zibjmV6arlJ`l|;0K_TmGgyNCHC`pfTlosO@W9#9s_ZE6%XUEw%bXt=12@|BL;`%JP zp9jAGGg5s+Of(Z|deMrpN`f8BD?8pLCO^LxR)Qs7+9w-ltzmE}yykk7t6R|5UGH1$ zdk+R!46|N~wsLbY0rTxaJC<6B_L7sb90PveXs4ICW!yQxEtx5C-4N)(;B3x#PjGPX z@zIfeVMJu4J#?~sr8|6MV*_1K^!Rt^a`pgmIxV`dmgxz`MVeyC?!@c83hC5j6UcE? 
z`qN&sm&mS>)y!PQFWLHQx28qXHrij`HOll_HOokk}lRLoT;W$@4t%k$^2!DgUd3g>{c6yW-5 z*D7lt&V`*rfu&SD`pBo^sVOP4TduE9sHEc$QsLJdyR{2W&dw4({^CC`mAa!ftg+KD z9%b@Y%JJu5nLZCN`#^9p7}F5)B1h2Fe|!su%o7AGRXbg-=`{Rc&pytWpBXp5mbd65 z=kV4TZ8jp4KDlBn2UeY`^)WyBj8`V#%k#uILE?O6x}ZT|C!YippN0?ohGjkO?#b$N!`SWZsms3MlZ=xYd*e?OZI5KQly2tFlBY7A)p*1xRY%X2hq1|^a~|CCD$bo* zt5D8|Gg?nmEJk~I#om|I0sT1MGFv=+4SSqXl%$K$A|{0x`JaID_5C*N$=!8o&mM~; z#-BXx8IBo|LT%}mX;|A>JC}4d$Q9%ZRiLw|&hQ;#vxq5qCgFHeWn_^JvMG}pZ8eh2eYawdiWF)~6nwV_GYcP{u@4n-}NsOSARg(=Vq_|^R(0*?DvrToC&VBGjc&p6)~e=qfu!xdft8WQtuc)Jlg z)+uIBK5S#gPUP%-_SzRkr^O>7u9DL7q##-m-mc)QdI?W1Wt$!HQoU?-dSUT4XBEY8 zap~POaD&{xh`Dsx{*byiu2zlb8_bedDuT5-$yy7T%MqzRYM9FB#oAmWQc?sByFyfp zH31l@q^KxlMu!?8W-K}*T)R&fZrx(MXT;_i?|^~s^{t09wkqqlV&XGw10Gh{}9*CvsBG9Y=7v)26lU^Ot%^W@ZVZ@ z3@rudi2Y`|6lu+bh?49izu&xusw0Vl+u2tBBg-qI=BBqP@yg3$u9t4p;``BSDLqon zN~+p&3l4l|AM69=%K0@8q&Nz2{+qZ|GlFn~kQr@>lMzjzZA`rJ&Ls44=$4X~k()5} z(Bhd`reKUTF@+YJlAS>uCtV0owL7DfJ^2S81U%U;zc3QJ_#e>q|9b-V6$_GjpjM8D z)mxUERVQ^!N4lRE_1*gz`1G43kS`McLAU;`>iJ2^{+-1AZtn&xW+{Bn3at~IO^l}yx zA&>r#Wdj@ipcVGs*ZQ`L;M4F=x({SdJfb<-2seQeWVJ|(3T+>_Mt=j-$K&HivNZRo zd?fm7K3AOX@tCr$&W7=pizkb;-$_ItKLA+urYMVqnvIG?l6S4q*#L3&a9f%1-)ua> zz85BK4($5-n05=keVVM_uZRS!|AFN$>BH=eSZdbgmqa#tI_dxZTaf@us$^X5`*-a8 zM6eKjyXY^&pySLf>S3eVJ-0F^;LX4I{6g7&0J1fa7=b(S9#hECzea>k?Js3pxd9oG z{nd7Nfg>6Q%1_#}UJ&kR3yi5QF%xoFO@;gIb{5Zao;!6nsdUw8G)ZL# zm+0=EQ6FJ-cY|>@|J}3R>94&;hA{M8w$>BII@21?=q_&Y-~p|4#efJx^Hj>*fVdx3 zvvQv;1OP(0NGBKa>gTjG^r_YjV!U=q-!%sFScL#22fW##HF#t%hy|N+*V!oK3*WFL zQI72Q>0dwpNWIQbKgg(5!m8Dyu=b>7njF33ZHU!*gceS(J>G1D>hL#5v^(j-FSP@d z{I~V>xe-l9L;73o;GL(W|F_!V6zlh6ef#$U1B*W?^!ow73gE><`Ki6RJK*r&r4|4G z%Mkv%Wdu$Y8;Yk(*iOsL!e|TWmvh-1TBFSpgae=7PLd;Vl6b?j!~i(kVqx7?kV?@c z*3`BPi^p#ndhdy|rLj5BOTTQzJyjgK>e#L?*=RNaq(A!UJvY&-cnZg9*$IDcOWANo z|EmyfrAX~%;G2j?9d{Y zb&}j1UN1O@v7d`u@z#Q+%^ZkAL*xcFocN^!)ddH(q<7@jE98pCxjZI6Q8!_3 zgICwn1i^0binS`!&vw#R&kMJ7yYxa)24}CPI;>}lPu^$q`k~x)!u%BPStK)1Z2Y*L zSztJ^-rc*pLB(Sm8ykaz(H)WR-(Ou@OXjxwG$$$|@=3L@8Q`)`cV|Ky z=W48HA}A#QhFMBVDwGeP+qpsOj~b7&vLv_4^71y@{Ls4^M@ZM~>$ebs<$Z2VwlP}| z-qa)?hL9X}3-)$6_`3O0D23?gU2h*Pe54O;RUFxdvHM5P9+MCB{GjL9Q%WDcHvQK7 zz4Gj7t@76Sn3k0=duUA~7-cAS%MvVqKfEPQz`2rG;OR!Dm-pPj`}n#h{-?Iwt2l9q z1o%NtOqY_fLL_^Xv@9x0P!_ShkPcoa+~)Hdd&pac^i9*0iOx0clLaS_?C2 z$hjaVoT1Y!kZwXifBYVpx__k7)hC6`RCZ^)IF5y-{{LmoNWAt2ymzI*dpdH!ZJUy0pnG*1C6 z1aI21G5-MI#S5+*q(Ob1ot^$c4)cx9w%cw?eguQbe5Q$m;p9T|<%SX~ol+cbV zm3RUgOz0CmL!Uv$NdB9pB6C?&X$hv;_1g~AR$?UHPVI)^Eq!x0&?eirqy?riGe>Bw zW6AsgpImrDoHdCOmC}&mNosj_UFYnNixNY8()50rRMb^Tj@rfw-uq^kO+$8@iQZ>g zDM~b)b0vzF?qQ&u4)AR+%4Tvdl1%sGpwVKXgg!4bAQK7x38vg<#$hvr5d2uin!R-j z1cM|p{PBC2$9U`26P_lx&vH4+2w!#`+bID>eKX{03n06hVyP`1g6b&F@NP?aT{w$* zgu!2VqAKj83fQfuqnGz%=+$Uwybr#k01}+ z6dr)NU=c1U`v zLenW9F}KBG!?T{6pk_gqM!8&>7k8T_r`uSpY%aa$Onzkm*m4~yim$|5e#e(4(XCix z$Yy6g>eDGPdM#E&h(rv(g7ZmgykiJe&Ej>U4;Q|>&`JU{Z}^=gZvMuetoGj7QNy=d zq+!myD#hLrR>ySbqaZOs#tD>LdMWIZkwUgk{le1Ly+s9DQ8j|JMu)8mfDKW~l?A{= zm!s7l+z(S_eDBcL}ZDk|E(2itFj zlaY~mo$bv5AVSmGygTj}bHi{lfws1`0(1z#kPEnHaF?I9rg71#Ff;e>+ozyr%H#|fnp3F%UrRF}j_K|&X zl;Q&Zn%RI%P7ihy9QP}~6wFKl(vn$Qh?yg?1r10z&R|mcf3uU@?Hsu!ItSvE{7cvy+R`-K^{xye(w7TAvU}Z*OU*t(X(!fW-r_RknmcHX zd1E-}7E`*!sq!Ln_|&+u;5I5V*ZSJ>QC-X%@LvLC{fXs!RzRQ@r!p#URmAr$SG$;? 
zpF$wTPL{0QYDbz$`lYq?Va&$n>GU*IQa0KH40wQV3-Fk4 zF5++S{;`Ra_yk3<#7qF#aIdMu6%x%=bEjrb-M=`WYHT$-q4OWok_*$Rbjqe5K_p zAXJODDjs-lBs*_T08WL4ojtnEZUC_%4fsf1_Zy0#+6Awfs!sr<*xK6a>N7YnK+10R zSYUPme(OB_Rt}n*>SZzCPV?7(BIspRd$}p)WUEe?^%$WIhoi*rEzW4&mw5euP6wd2c1Mac|>Vga!nTC0XfcBdJD{KT@EptS61Oug{P{Q zUn(y0N?C+A@ZNFfBBnQ%PJ#v;g3~FMK^b^!haS&HJ*dF-tL1uKP?}uQ*IxPi_)Ha`4X3j+}j(aQQ6lQn_57$!H$oq@|}tl z$7D8XQrNkDO%xETX~swNGRudEkAOsJn@%-WNiv7;(hu$!i-qmQ)lQg(fyX>R`NC_7 z)7gz}VXL*Nw{>^0J(=5-iiPZ>$;5{w6f-w!9cI7s-6FNDLlV*=+4iqg416{Xeq!OW zJV}k!WZQbV@3DTBbGJZbe&105WlT?|B}r{z(=Alm|ztS@b_;yown>kkkJB;dAIna_w@=n z3k!=c3WnQJ5B1grz)b~pu2E~{8T$-^!SLl^=1EG(Osy@K(7j^SLhq~HT0l0b_`}KG zo{p9l$TvRT0-R8`e)ROtsfvgbMwN2OHkd|UKUCgUGpfh+Bvz93xjaq1$q&_{-NdBTbfy70wH;) zRR(#gr>@ymsV8`R3VG~6FpsZZ;y3MgO{F0+bgSWuRm?UY0WoFI=&sbp*8p+79yOZ0 z%Y;AX=S#H2M%o@$tJJ zLd3+xfQlk(RI1*o-5$rj|Hq=Br_(e3Kw&$ckz`%H{e%5>P}K z;5Y%fu3qUuFMVm5YqLhB2_P`ZYgc<1sz;%mUwvZ=P2V(cafwsW@yXdZU3(FS<)sZ4 zZ|`p!RwW0szzs6em(gguHv2h)+Hw#tEf(jfObVGNUq0>P7@$AZcE7%P`jw5BkMtm` zDJfE=gQ0eFpQ=18JbV$FpaUcfg-3<%~RfgY42{I*A$ zEB4EpoMW;%zZyHUOTJPW&#wiy1ge1;XS6})6LT>ZUha0We7xM~^W^&DVDITmY%vR6 z4Qc7y1OSRu1B6iDc6tCknZXg5wu*w2MoO7S4QHbm=IvM_F~W;#*4qM*Jf^K0=i^$o z<#wl6`>k9l(V6}TGJFe7u9cOQ$9W2w(>2z8B@3H?n>I7cyU^esFVzDJIslGaGcirV zXnzvWj1}!Glv)>h)0>>W{Qdwa+b&#t^dbZ(1cOCR zLmW**^;-CMpS~^)zFDV0-R*_7h)XanWF?nPJhF<7kY5X$uf1ED5z$W+ z41h?)C6-#-(6(&jTA}}HQTYnEbTNa#4gWMJKzTQYNC`L@Kp z<5#xcp7h<-(lJ`Mc%r_7xeP#x1FimO3s4(9Dd3Vp#6)=p69REA%kdCJ*15?teNpqR zn`4ZL$7!@zf@&Xa$`;;O%{80=2fH7PlpGhwW%T1QS}%KKq=UV^eHRj-lWKN9GdTl< z)eSHhY-gghyQ_ZlLhgL!r4JwjR_%iYJj4t?5=UheGnN`gLPvFmulW1 zh`x68+orR-YhTpzkB(2>Iqb&5M3w00o6OnJ?j7kWJu4T<95K7H)MFrO-h7qag|B1c z!8@1$dT;^Y7r5RFq)iskWlEsB770Jv3Bu{(w~hvK%r>hfZeCA5Kfe74pT2J<+L?>%wq)zo7%g}V@f~hFls=n+N!P7x0~gXE6aSPn zJn#a9m%2JdZyW4wG5ywoGRvxUXxC!-`kTW>S&gE%^>=I$~PL|dM9@RV7ex~#DK0jAWVlQMo%v{XCy1cP!O zd*k5!a!dZ2Gi1gnKc>oP@LgzV(tZa!){}L>VT7ci_1k`J19<#9fPAVv7Xao8A z`G7Le)z!6&#wxGw@q0HnUhCXcqJf*2_uZOP6zzMFM#b7goQEu`yL9uFB@gq`(tPIX z0gCEGH7F>EMz>Nw>{Ph^Ys2vHuo3IiITkjyD6j0$ucCeI^XFC|5C|P8w*g^yM8xZG zOWsFmnE$ZJWz!w~W-zq+oO8|P44}D50R*$I1kyEtt5bZ#2_69CY)7O5?<pPX;`zQ+TZ{uR=2J3fSOHu8Ro$%@tXd}#^*CIjf10KdY{ z;?2|JH-m`aByfW2=5?Dx-FyV5tJY>$i1^D<_K<^``7 zOQm#ATs7%7Z?71r?``nbTK*GoXK2Q)x^2hbSkBhF<9qUP9|Obcgftlt5>VBMRY0(T0q983Epou)>li@{9ncDVAV^>YwPt{T8#J~C1Q86t>;MD@m_Q~2 lff#5~1qeos7&Kw=qh9Tqxyk!GOUghIvd$@?2>|eQenkKP literal 0 HcmV?d00001 diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg new file mode 100644 index 00000000000..18add6c9856 --- /dev/null +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md index 6cb4c724d72..03e75b6ad89 100644 --- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -23,6 +23,10 @@ This guide shows how to build and benchmark a complete AI inferencing solution u The primary AI model used in this guide is a ResNet50 computer vision (CV) model. 
However, the techniques used can be applied to other model architectures like object detection ([YOLO](https://en.wikipedia.org/wiki/You_Only_Look_Once); You Only Look Once) models, speech recognition systems (OpenAI's [Whisper](https://openai.com/index/whisper/)), and large language models (LLMs) like [ChatGPT](https://openai.com/index/chatgpt/), [Llama](https://www.llama.com/), or [Claude](https://www.anthropic.com/claude). +{{< note title="GPU Plan Access" >}} +In some cases, a $100 deposit may be required to deploy GPU Linodes. This may include new accounts that have been active for less than 90 days and accounts that have spent less than $100 on services. If you are unable to deploy GPU Linodes, contact [Support](https://www.linode.com/support/) for assistance. +{{< /note >}} + ## What are TensorRT and PyTorch? ### TensorRt @@ -42,24 +46,27 @@ The following prerequisites are recommended before starting the implementation s - An understanding of Python virtual environments and package management - General familiarity of deep learning concepts and models -{{< note >}} -This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see our [Users and Groups](https://www.linode.com/docs/guides/linux-users-and-groups/) doc. +{{< note title="Sudo Users & Distribution" >}} +This guide is written for a non-root user on the Ubuntu 24.04 LTS Linux distribution. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see our [Users and Groups](https://www.linode.com/docs/guides/linux-users-and-groups/) doc. {{< /note >}} -## Deploy an NVIDIA RTX 4000 Ada Instance +## Architecture Diagram -Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI. This guide is written for use with the Ubuntu 24.04 LTS distribution. +![PyTorch and TensorRT Diagram](PyTorch-TensorRT-Diagram.svg) -### Deploy Using Cloud Manager +## Deploy an NVIDIA RTX 4000 Ada Instance +Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI. -### Deploy Using the Linode CLI +- For instructions on deploying a GPU instance via the Cloud Manager, see our [Create a Linode](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance) guide. +- For guidance on deploying a GPU instance using the Linode CLI, see the [Create a Linode](https://techdocs.akamai.com/linode-api/reference/post-linode-instance) section of our API documentation. +- For a list of GPU region availability, see our [Choose a Data Center](https://techdocs.akamai.com/cloud-computing/docs/how-to-choose-a-data-center) guide. See our API documentation to see a [region's service availability](https://techdocs.akamai.com/linode-api/reference/get-account-availability) using the Linode API or CLI. ## Set Up Your Development Environment -Once it is fully deployed, connect to your GPU instance to update system packages and install system dependencies. It is recommended to follow the steps in our [Set up and secure a Linode](https://techdocs.akamai.com/cloud-computing/docs/set-up-and-secure-a-compute-instance) guide to configure a limited user with sudo access and secure your sever. +Once your GPU is fully deployed, connect to your instance to update system packages and install system dependencies. 
It is recommended to first follow the steps in our [Set up and secure a Linode](https://techdocs.akamai.com/cloud-computing/docs/set-up-and-secure-a-compute-instance) guide to configure a limited user with sudo access and secure your server.

 ### Update Packages

From 7f8178142c029665226d33d7aa7ec7cb0f8d1b46 Mon Sep 17 00:00:00 2001
From: jddocs
Date: Wed, 16 Jul 2025 11:05:50 -0400
Subject: [PATCH 5/8] copy edits and diagram swap

---
 .../PyTorch-TensorRT-Diagram.svg | 2 +-
 .../index.md | 99 +++++++++++++++++--
 2 files changed, 91 insertions(+), 10 deletions(-)

diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg
index 18add6c9856..32ceb68569b 100644
--- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg
+++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/PyTorch-TensorRT-Diagram.svg
@@ -1 +1 @@
- \ No newline at end of file
+ \ No newline at end of file
diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
index 03e75b6ad89..809f54ebc0a 100644
--- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
+++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md
@@ -8,7 +8,7 @@ published: 2025-06-27
 keywords: ['ai','inference','inferencing','llm','model','pytorch','tensorrt','gpu','nvidia']
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 external_resources:
-- '[Link Title 1](http://www.example.com)'
+- '[Akamai TechDocs: GPU Linodes](https://techdocs.akamai.com/cloud-computing/docs/gpu-compute-instances)'
 - '[Link Title 2](http://www.example.net)'
 ---

@@ -46,7 +46,7 @@ The following prerequisites are recommended before starting the implementation s
 - An understanding of Python virtual environments and package management
 - General familiarity of deep learning concepts and models

-{{< note title="Sudo Users & Distribution" >}}
+{{< note title="Sudo Users & Linux Distribution" >}}
 This guide is written for a non-root user on the Ubuntu 24.04 LTS Linux distribution. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see our [Users and Groups](https://www.linode.com/docs/guides/linux-users-and-groups/) doc.
 {{< /note >}}

@@ -54,15 +54,29 @@ This guide is written for a non-root user on the Ubuntu 24.04 LTS Linux distribu

 ![PyTorch and TensorRT Diagram](PyTorch-TensorRT-Diagram.svg)

+1. The user connects to the NVIDIA RTX 4000 Ada GPU instance via SSH.
+
+1. CUDA (Compute Unified Device Architecture) and NVIDIA drivers are installed via the CUDA keyring to ensure the latest stable versions are running.
+
+1. PyTorch, TensorRT, and their dependencies are installed in a Python Virtual Environment (venv) to prevent any conflicts with system-wide packages.
+
+1. An inferencing script written in Python is created. The script imports a pre-trained AI model (ResNet50) and benchmarks the inference time against sample images to test GPU performance.
+
+1. The inference script outputs the average time per inference across a number of runs for a given sample image.
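+
+Once the driver and Python environment described in steps 2 and 3 of this workflow are in place (covered in the sections that follow), you can confirm that PyTorch detects the GPU before running any benchmarks. A quick check from the virtual environment might look like this:
+
+```command {title="(venv)"}
+python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
+```
+
+This should print `True` followed by the GPU model name if the CUDA drivers and the PyTorch installation can communicate with the RTX 4000 Ada card.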
+
+## Deploy an NVIDIA RTX 4000 Ada Instance

-Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI.
+Akamai's NVIDIA RTX 4000 Ada GPU instances can be deployed using Cloud Manager or the Linode CLI. Deployment instructions and technical requirements:
+
+- **Cloud Manager deployment**: For instructions on deploying a GPU instance via the Cloud Manager, see our [Create a Linode](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance) guide.

-- For instructions on deploying a GPU instance via the Cloud Manager, see our [Create a Linode](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance) guide.
+- **CLI deployment**: For guidance on deploying a GPU instance using the Linode CLI, see the [Create a Linode](https://techdocs.akamai.com/linode-api/reference/post-linode-instance) section of our API documentation.

-- For guidance on deploying a GPU instance using the Linode CLI, see the [Create a Linode](https://techdocs.akamai.com/linode-api/reference/post-linode-instance) section of our API documentation.
+- **Distribution**: Select the latest stable Ubuntu version (Ubuntu 24.04 LTS as of this writing).

-- For a list of GPU region availability, see our [Choose a Data Center](https://techdocs.akamai.com/cloud-computing/docs/how-to-choose-a-data-center) guide. See our API documentation to see a [region's service availability](https://techdocs.akamai.com/linode-api/reference/get-account-availability) using the Linode API or CLI.
+- **Plan type**: All RTX 4000 Ada GPU plan types support the AI inference workload in this guide.
+
+- **Region**: For a list of GPU region availability, see our [Choose a Data Center](https://techdocs.akamai.com/cloud-computing/docs/how-to-choose-a-data-center) guide. You can also check a [region's service availability](https://techdocs.akamai.com/linode-api/reference/get-account-availability) using the Linode API or CLI.

 ## Set Up Your Development Environment

@@ -182,7 +196,7 @@ Set up and use a Python Virtual Environment (venv) so that you can isolate Pytho

 Remain in your virtual environment to install PyTorch, TensorRT, and dependencies. These are the primary AI libraries needed to run your inference workloads.

 ```command {title="(venv)"}
-pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
+pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 torchaudio==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121
 pip install requests
 pip install nvidia-pyindex
 pip install nvidia-tensorrt
 pip install torch-tensorrt -U
@@ -199,7 +213,19 @@ Create and run a Python script using a pre-trained ResNet50 computer vision mode
 nano {{< placeholder "inference_test.py" >}}
 ```

-1. Copy and insert the following code content into the script. Note the commented descriptions for what each section of code performs:
+1. Copy and insert the following code into the script.
In order, the script performs the following actions: + + - Imports the PyTorch framework and its pre-trained models + + - Pulls an example sample image of a dog from PyTorch's GitHub repository + + - Preprocessing for the sample image: Image resizing for compatibility with the ResNet50 model, format conversion for PyTorch, add a "batch dimension" to emulate multiple images, moves the processed data to the GPU + + - Loads the ResNet50 pre-trained model, including a library of images + + - GPU optimization and preparation for benchmarking + + - Runs the inference benchmark 20 times against the ResNet50 AI model ```file {title="inference_test.py"} # import PyTorch, pre-trained models from torchvision and image utilities @@ -267,7 +293,62 @@ Create and run a Python script using a pre-trained ResNet50 computer vision mode Average inference time: 0.0025 seconds ``` - It is recommended to time how long it takes to run the model 20 times, and then divide by 20 to get the average time per inference. This should give you an idea of how quickly your GPU can process input using this model. + The model runs 20 times, and the total inference time is then divided by 20 to get the *average time per inference*. This provides an idea of how quickly your GPU can process input using this model. + +### Accelerate Inferencing with TensorRT + +If you want to accelerate your GPU's inferencing power further, you can use NVIDIA's optimized inference runtime, TensorRT, to deliver faster inference with lower latency. + +1. Add the highlighted line (line 10) to the `import` section at the top of your inference script to import the TensorRT model (`torch_tensorrt`) previously installed: + + ```file {title="inference_test.py" hl_lines="10"} + # import PyTorch, pre-trained models from torchvision and image utilities + + import torch + import torchvision.models as models + import torchvision.transforms as transforms + from PIL import Image + import requests + from io import BytesIO + import time + import torch_tensorrt + ``` + +1. Next, add the highlighted code block (lines 35-51) after the `Load a model` section to load the TensorRT-optimized model: + + ```file {title="inference_test.py" linenostart="31" hl_lines="5-21"} + # Load a model (ResNet50) pretrained on the ImageNet dataset containing millions of images + + model = models.resnet50(pretrained=True).cuda().eval() + + # Compile with TensorRT + model = torch_tensorrt.compile( + model, + inputs=[torch_tensorrt.Input(input_tensor.shape)], + enabled_precisions={torch.float} + ) + + # Benchmark TensorRT Inference + for _ in range(5): + _ = model_trt(input_tensor) + + start = time.time() + with torch.no_grad(): + for _ in range(20): + _ = model_trt(input_tensor) + end = time.time() + print(f" TensorRT average inference time: {(end - start) / 20:.4f} seconds") + ``` + + Save your changes when complete. + +1. Run the script again, and compare the `Average inference time` output to that of your PyTorch results: + + ```output + Average inference time: + ``` + +As you scale your inference, using TensorRT can help keep your model smaller and more performant. 
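For reference, a self-contained version of the compile-and-benchmark step might look like the sketch below. It assumes `model`, `input_tensor`, `time`, `torch`, and `torch_tensorrt` are already defined or imported as in the script above, and it stores the compiled module under its own name (`model_trt`) so the original PyTorch model and the TensorRT build can be timed side by side:

```python
# Sketch: compile the loaded ResNet50 with Torch-TensorRT and benchmark it.
model_trt = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(input_tensor.shape)],
    enabled_precisions={torch.float},  # {torch.half} can be tried for FP16 on supported GPUs
)

with torch.no_grad():
    # Warm-up passes so one-time initialization costs are not measured.
    for _ in range(5):
        _ = model_trt(input_tensor)

    # Time 20 passes and report the average per inference.
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(20):
        _ = model_trt(input_tensor)
    torch.cuda.synchronize()
    end = time.time()

print(f"TensorRT average inference time: {(end - start) / 20:.4f} seconds")
```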
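To quantify the improvement, divide the PyTorch average by the TensorRT average. The values below are placeholders for illustration only; substitute your own measured results:

```python
# Illustrative speedup calculation using placeholder numbers.
pytorch_avg = 0.0025    # seconds per inference from the PyTorch benchmark (example value)
tensorrt_avg = 0.0012   # hypothetical TensorRT result; replace with your measurement
print(f"TensorRT speedup: {pytorch_avg / tensorrt_avg:.1f}x")
```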
## Next Steps From 3a68b0dd43803efb46a158558d1087e464c95fda Mon Sep 17 00:00:00 2001 From: jddocs Date: Wed, 16 Jul 2025 11:28:39 -0400 Subject: [PATCH 6/8] copy edit --- .../big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md index 809f54ebc0a..96eb6d4fa3a 100644 --- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -203,7 +203,7 @@ pip install nvidia-tensorrt pip install torch-tensorrt -U ``` -## Test and Benchmark the ResNet50 Inference Model +## Run and Benchmark the ResNet50 Inference Model Create and run a Python script using a pre-trained ResNet50 computer vision model. Running this script tests to make sure the environment is configured correctly while providing a way to evaluate GPU performance using a real-world example. This example script is a foundation that can be adapted for other inference model architectures. From 7158279a3806de14ac292a514f9f5ee7fcd6ee2f Mon Sep 17 00:00:00 2001 From: jddocs Date: Mon, 21 Jul 2025 11:45:26 -0400 Subject: [PATCH 7/8] copy edit, add CV and inferencing sections --- .../index.md | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md index 96eb6d4fa3a..1b4bb9384e2 100644 --- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -27,6 +27,16 @@ The primary AI model used in this guide is a ResNet50 computer vision (CV) model In some cases, a $100 deposit may be required to deploy GPU Linodes. This may include new accounts that have been active for less than 90 days and accounts that have spent less than $100 on services. If you are unable to deploy GPU Linodes, contact [Support](https://www.linode.com/support/) for assistance. {{< /note >}} +## AI Inferencing + +### What is AI Inference? + + + +### What is Computer Vision (CV)? + +Computer vision is a type of artificial intelligence that interprets images and outputs physical information about what is detected in the image. This guide uses a CV model (ResNet50) that uses a specific pre-trained set of images and runs an example image against that set. In this example, inferencing occurs when the CV model returns information about the example image (i.e. "this is a picture of a dog") based on its pre-trained knowledge base of millions of sample images. + ## What are TensorRT and PyTorch? 
### TensorRt @@ -217,7 +227,7 @@ Create and run a Python script using a pre-trained ResNet50 computer vision mode - Imports the PyTorch framework and its pre-trained models - - Pulls an example sample image of a dog from PyTorch's GitHub repository + - Pulls an example sample image of a dog from PyTorch's GitHub repository on which to run AI inference - Preprocessing for the sample image: Image resizing for compatibility with the ResNet50 model, format conversion for PyTorch, add a "batch dimension" to emulate multiple images, moves the processed data to the GPU From 640f534a34559df0c3cb0a960fd2d87729ddb669 Mon Sep 17 00:00:00 2001 From: jddocs Date: Fri, 8 Aug 2025 13:34:47 -0400 Subject: [PATCH 8/8] added ai inference definition, edited release date --- .../ai-inferencing-with-tensorrt-and-pytorch/index.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md index 1b4bb9384e2..5313f17bf5a 100644 --- a/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md +++ b/docs/guides/applications/big-data/ai-inferencing-with-tensorrt-and-pytorch/index.md @@ -4,7 +4,7 @@ title: "Build an AI Inferencing Solution With TensorRt and PyTorch" description: "Enhance deep learning capabilities with TensorRT and PyTorch on Akamai Cloud. Optimize inferencing for various AI models using NVIDIA RTX 4000 Ada GPU instances." authors: ["Akamai"] contributors: ["Akamai"] -published: 2025-06-27 +published: 2025-08-08 keywords: ['ai','inference','inferencing','llm','model','pytorch','tensorrt','gpu','nvidia'] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' external_resources: @@ -31,7 +31,11 @@ In some cases, a $100 deposit may be required to deploy GPU Linodes. This may in ### What is AI Inference? +AI inference occurs after model training - it’s the point at which the AI model generates an “opinion” or decision based on how it was trained. Think of inference like how people have the ability to form a point of view based on prior knowledge and experience. +Consider an AI model trained on a data set that includes millions of images of dogs. If given a new image of a dog not in the data set, the AI model uses inference to determine information about the new dog (i.e. the dog’s breed). + +The goal of AI inference is to generate an educated, accurate result from a well-trained model with speed and efficiency. ### What is Computer Vision (CV)?
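As a concrete illustration of the inference step described above, the sketch below shows how a classification model's raw output becomes a decision such as "this is a picture of a dog". It assumes the `model` and `input_tensor` objects from the benchmarking script; mapping the class index to a readable label would additionally require the ImageNet class list, which is not shown here:

```python
# Sketch: turn ResNet50's raw output into a single predicted class.
import torch

with torch.no_grad():
    logits = model(input_tensor)                                   # one raw score per ImageNet class
    probabilities = torch.nn.functional.softmax(logits[0], dim=0)  # convert scores to probabilities
    top_prob, top_class = torch.max(probabilities, dim=0)          # highest-probability class

print(f"Predicted class index: {top_class.item()} (confidence: {top_prob.item():.2%})")
```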