# Client files
## Ways of serving client files in production

There are multiple ways to serve built client files in a production environment:

- From client pods (separate from API pods)
- From API pods
- From the storage provider, such as S3/CloudFront

You should decide how you want to serve them now, because a few later steps are affected by that choice, and changing your AWS configuration after everything has been set up one way is a little tricky.

### Serve client files from client pods

This is the default method. The Helm charts assume that the deployment will have client pods to serve client files, and the client ingress will point traffic to the client pods. The client URL will be pointed to the EKS load balancer, and you will not need a separate client certificate.

This option gives you slightly more flexibility in scaling a deployed cluster than serving from the API pods, since you can scale the number of API and client pods independently.

Note that, as of this writing, this method is tentatively going to be deprecated by a future re-architecture of how projects are built and served, and serving from the storage provider may end up being the only allowed option.

### Serve client files from API pods

This will make your builder build and serve the client service from the API pods. The Helm chart will not have a client Deployment, ServiceAccount, ConfigMap, etc., and the client ingress will point to the API pods. The client URL will be pointed to the EKS load balancer, and you will not need a separate client certificate.

To enable this, set `client.serveFromApi` to `true` in your Helm config file when you are configuring it, as shown in the sketch after this section. This needs to be applied to both the builder deployment and the main deployment, but if you set it before deploying anything, it will be applied to both.

This option can save you some money by requiring fewer nodes to host all of the API+client pods you need, since you do not need capacity for separate client pods. It offers slightly less flexibility in scaling, since you cannot scale the number of API and client pods separately; more client capacity requires more API capacity, and vice versa. It will also result in slightly longer deployment times: the combined API+client Docker images are larger than an API-only or client-only image (though smaller than the sum of the two separate images), which means a few extra seconds of download time on each node.

Note that, as of this writing, this method is likewise tentatively going to be deprecated by the same future re-architecture.
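Below is a minimal sketch of the relevant Helm values. Only the `client.serveFromApi` key comes from the setting named above; the surrounding layout is an assumption about a typical values file, not the chart's definitive schema.

```yaml
# values.yaml excerpt (hypothetical layout; only `client.serveFromApi`
# is taken from the text above)
client:
  # Build and serve the client from the API pods; the chart will not create
  # a separate client Deployment, ServiceAccount, ConfigMap, etc.
  serveFromApi: true
```

Remember to set this in the values files for both the builder deployment and the main deployment (or once, before anything is deployed), and apply it with whatever `helm install`/`helm upgrade ... -f <values-file>` invocation you use for the rest of your configuration.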
### Serve client files from the storage provider (S3 + CloudFront)

This will make the client build process push all of its built files to S3 and serve them via CloudFront. Static resources will also be served from the client domain instead of a separate resources domain. The client URL will be pointed to the CloudFront distribution, not the EKS load balancer; only API and instanceserver traffic will go to the EKS cluster. You will need a separate client certificate, but you will not need a resources domain certificate.

As of this writing, only Amazon S3/CloudFront is supported as a storage provider in a cloud environment.

To enable this, set `builder.extraEnv.SERVE_CLIENT_FROM_STORAGE_PROVIDER` to `true` in the Helm config file when you are configuring it, and make sure that `builder.extraEnv.STORAGE_PROVIDER` is set to `s3`; see the sketch below.

This option can greatly speed up the time it takes for users to fully load your worlds, since every client file can be served from a CDN close to them rather than fetched from the closest physical server. It will also slightly speed up build and deployment times, since the client build does not need to be pushed to a Docker repo (though a cache of the build will still be pushed to speed up future builds).
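As with the previous option, here is a hedged sketch of the corresponding Helm values; the two environment variable names under `builder.extraEnv` follow the settings described above, while everything else is an assumed layout.

```yaml
# values.yaml excerpt (hypothetical layout; the two env var names come
# from the text above)
builder:
  extraEnv:
    # Push built client files to S3 and serve them via CloudFront
    SERVE_CLIENT_FROM_STORAGE_PROVIDER: "true"
    # S3/CloudFront is the only supported cloud storage provider as of this writing
    STORAGE_PROVIDER: s3
```

Note that in this mode the client domain's DNS record should point at the CloudFront distribution rather than the EKS load balancer, since only API and instanceserver traffic goes to the EKS cluster.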