# Debugging deployed instanceservers (and other Kubernetes pods)
Because of the nature of Kubernetes, logs of fatal errors on instanceserver or API pods can sometimes disappear before one has a chance to view them, as the pods that they were on are deleted along with their logs. One way to catch these errors is to tail the logs of existing pods from a local machine and then trigger the error; the tail of the logs will persist in your terminal even after the pod has been deleted.

You should already have kubectl set up and pointing to your cluster, but if not, do so (see [here](/manual/modules/infrastructure/devopsdeployment/managingremotekubernetes) for links on how to do that). Make sure you don't have a browser tab with the offending location(s) open already, as you want to be tailing the logs before the instance starts.

Next, run `kubectl get gs`. If the cluster is fully installed, this will get all of the running instanceserver pods (`kubectl get pods` will get all pods, if you need to find the names of API pods, etc.)

Select the name of a pod and copy it (in Linux, highlight it and press Ctrl+Shift+C), then run `kubectl logs <pod name> -c <release name>-instanceserver -f`, e.g.:

```bash
kubectl logs prod-instanceserver-vhwh2-9vqrv -c prod-instanceserver -f
```

It should output something like this for an instanceserver pod:

```
> @ir-engine/packages/instanceserver@1.3.0 start
> cross-env APP_ENV=production ts-node --swc src/index.ts

👾 bitECS - resizing all data stores from 100000 to 5000
powered by three.quarks https://quarks.art/
[hyperflux:Action] Added topic default
[hyperflux:State] registerState SceneState
[hyperflux:Action] Added receptor EngineEventReceptor
[hyperflux:State] registerState EngineState
[hyperflux:State] registerState ServerState
Tue, 11 Jul 2023 00:38:50 GMT koa deprecated Support for generators will be removed in v3. See the documentation for examples of how to convert old middleware https://github.com/koajs/koa/blob/master/docs/migration.md at ../node_modules/@feathersjs/koa/lib/index.js:52:27
[00:38:50.631] info: Starting app component: "server-core:sequelize"
[hyperflux:State] registerState NetworkState
[00:38:50.645] info: Starting app component: "server-core:mysql"
[00:38:50.900] info: Registered kickCreatedListener component: "instanceserver:channels"
[00:38:50.901] info: Starting instanceserver with NO HTTPS on 3031, if you meant to use HTTPS try 'sudo bash generate-certs' component: "instanceserver"
[00:38:51.036] info: Feathers-sync started component: "server-core"
[00:38:51.634] info: Server Ready component: "server-core:sequelize"
```

Since the instanceserver pod that is picked to handle a given world or media instance is random, you'll want to open a few more tabs in your terminal and repeat the above `kubectl logs` command, substituting a different instanceserver pod name in each tab, so that you're tailing all of the current pods. Then go to the location that is displaying problematic behavior, or otherwise trigger the action that is causing problems, and you should see the error in one of the terminals. If it's a fatal error, the logging will end with the pod, but the logs will stay in that terminal.

Note that if you want to log further errors, you may need to get the names of the new pods that are spun up to replace the ones that crashed by running `kubectl get gs` or `kubectl get pods` again, and then using the new pods' names in `kubectl logs` commands.
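Opening one terminal tab per pod works, but it can get tedious when there are many instanceserver pods. Below is a minimal shell sketch that tails every gameserver pod at once, prefixing each line with its pod name. It assumes the pods carry Agones' standard `agones.dev/role=gameserver` label and that the container follows the `<release name>-instanceserver` naming shown above; adjust both to match your deployment.

```bash
#!/bin/bash
# Tail all instanceserver pods at once, prefixing each line with the pod name.
# Assumptions (adjust for your cluster): pods carry Agones' standard
# agones.dev/role=gameserver label, and the container is named
# <release>-instanceserver, as in the examples above.
RELEASE="${1:-prod}"

for pod in $(kubectl get pods -l agones.dev/role=gameserver -o name); do
  # Strip the leading "pod/" from the resource name for a cleaner prefix.
  name="${pod#pod/}"
  kubectl logs "$pod" -c "${RELEASE}-instanceserver" -f --tail=10 \
    | sed "s|^|${name}: |" &
done

# Block until interrupted; Ctrl+C stops all of the background tails.
wait
```

Newer versions of kubectl can also tail by label directly, e.g. `kubectl logs -l agones.dev/role=gameserver -c prod-instanceserver -f --prefix`, though it limits how many pods it will stream from concurrently (see the `--max-log-requests` flag). Either way, the tailed logs persist in your terminal after a crash, just as with the single-pod command.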