Where should I add logging in my Go web scraper?
Question
I'm writing a suite of website scraper functions. Each function reads an HTML document and returns a single value. To tie this all together, I've got a function - let's call it ScrapeUrl - that accepts a URL, reads it, and builds a struct instance out of the results from each of the scraper functions in the suite.
I want to add logging to this so that I can see when non-critical values from the scraper functions are missing. But I don't know where the logger would slide in - should I log from:
- Inside each scraper function?
- Inside the ScrapeUrl function, based on the return value?
I have a feeling it's #2, but I'm not familiar with global loggers like the one Go offers. Instead, I'm used to named loggers.
Thanks
Answer 1
Score: 1
You can use a named logger such as github.com/golang/glog to log in both places, but only output what you want when you need it.