
Hey, I have a job running on a GCP VM where some imported libraries write to stdout with `print` or the Python `logging` library, and I have also written a lot of code that logs to stdout with the Python `logging` library (e.g. `logging.info("*** INFO ***")`). The issue: the logs are printed to stdout but are not sent to GCP Cloud Logging. I have tried many things, such as installing the logging agent on the VM, going through a bunch of GCP docs, and using deprecated logging libraries, but nothing has worked. Here are some examples:

gcp_log_example.py - The 'HI' log gets sent to GCP Cloud Logging, but none of the logs in builtin_logger.py do. This is why I cannot use this approach.

import subprocess
import google.cloud.logging
import logging

client = google.cloud.logging.Client()
client.setup_logging()
logging.info('HI')

subprocess.run('python3 builtin_logger.py', shell=True)

^^ I will not use this approach because, as far as I can tell, it would require me to go through every library and integrate GCP logging. No way anyone is going to do that.
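For what it's worth, I think the reason the subprocess logs are lost is that `client.setup_logging()` only attaches a Cloud Logging handler to the root logger of the *current* interpreter; a child process starts with a fresh root logger and inherits nothing. A minimal sketch (no GCP involved) showing that handlers are per-process:

```python
import logging
import subprocess
import sys

# Handlers attached here (e.g. by client.setup_logging() or basicConfig)
# live only in this interpreter's root logger.
logging.basicConfig(level=logging.INFO)
parent_handlers = logging.getLogger().handlers
print("parent handler count:", len(parent_handlers))

# A child interpreter starts with an empty root logger:
child = subprocess.run(
    [sys.executable, "-c",
     "import logging; print(len(logging.getLogger().handlers))"],
    capture_output=True, text=True,
)
print("child handler count:", child.stdout.strip())
```

So even if the parent is wired up to Cloud Logging, `subprocess.run('python3 builtin_logger.py', ...)` sees only plain stdout.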

builtin_logger.py

import sys
import logging

logging.basicConfig(level=logging.INFO)
print('*** print! ***')
logging.info('*** INFO! ***')
logging.warning('*** WARNING! ***')
logging.critical('*** CRITICAL! ***')
sys.stdout.flush()

^ These logs get sent to stdout but are not sent to GCP.

I've also tried running it like this. In this case the logs do reach GCP Cloud Logging, but the severity level is not captured, and each log entry shows up twice (the logs are duplicated):

tmp.sh

#!/bin/bash

python3 builtin_logger.py 2>&1 | logger -t "builtin_logger.py"
sudo docker-compose up --build 2>&1 | logger

^ This is basically what I want; the only problem is that it doesn't capture the severity.
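One idea I'm considering (untested against the agent itself): skip the pipe entirely and send records to syslog with the stdlib `logging.handlers.SysLogHandler`, which maps Python levels to syslog priorities, so the severity might survive into whatever the agent picks up from syslog. A sketch, assuming the local syslog daemon listens on UDP port 514 (on most Linux boxes you'd use `address='/dev/log'` instead):

```python
import logging
import logging.handlers

# Assumption: syslog is reachable on localhost:514/UDP; swap in
# address='/dev/log' for the usual Linux Unix-domain socket.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("myapp: %(message)s"))

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(handler)

# Unlike piping stdout through `logger`, each record carries its own
# syslog priority: INFO -> 'info', WARNING -> 'warning', etc.
logging.info("goes out at syslog priority 'info'")
logging.warning("goes out at syslog priority 'warning'")
```

This still wouldn't catch bare `print()` output from third-party libraries, only records that go through the `logging` module.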

tl;dr: I need to get logs from stdout on a GCP VM sent to GCP Cloud Logging. Is this even the right approach?

Help is greatly appreciated, thanks!

  • If you installed Google logging and the stdout is going to syslog, then the logging agent is not configured correctly. Edit your post and include /etc/google-fluentd/config.d/syslog.conf. More details here: cloud.google.com/logging/docs/agent/logging/… Dec 1 at 22:16
  • Perhaps your VM cannot write to Cloud Logging. Add the output from this command to your post: curl --silent --connect-timeout 1 -f -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/scopes. Run the command from the VM OS. Dec 1 at 22:23
  • There are two agents for logging. Ops Agent and the legacy logging agent. My first comment is for the legacy logging agent, which is still used a lot. If you are using Ops Agent, a logging receiver must be correctly configured. More details here: cloud.google.com/logging/docs/agent/ops-agent/… Dec 1 at 22:26
  • Hey thanks @JohnHanley, but unfortunately even after creating a new VM / installing ops agent I'm unable to see logs from STDOUT in gcp cloud logging. The simplest .py script logging.info('hi') doesn't work. Following here: cloud.google.com/stackdriver/docs/solutions/agents/ops-agent/… - it works if I use pip google-cloud-logging library. This isn't an option since 3rd party packages do not use google-cloud-logging. The cmd in your 2nd comment shows it's working, and this command shows everything is running properly: sudo systemctl status google-cloud-ops-agent"*" Dec 2 at 19:04
  • I don't have any more ideas. I do not have a similar problem with any of my VMs. It appears that the issue is with the OS logging. Where are the logs from your program going and what is the logging level? Once you know where the log entries are going, you can configure Google Ops Agent to log the contents of that file or adjust the capture level. Dec 2 at 23:20

