Comments convention "// $example on:" and "// $example off:" in Scala and Java
Question
I found comments like the following in the "examples" folder of the standard Spark distribution:
// $example on:programmatic_schema$
import org.apache.spark.sql.Row
// $example off:programmatic_schema$
// $example on:init_session$
import org.apache.spark.sql.SparkSession
// $example off:init_session$
// $example on:programmatic_schema$
// $example on:data_types$
import org.apache.spark.sql.types._
// $example off:data_types$
// $example off:programmatic_schema$
object SparkSQLExample {
// $example on:create_ds$
case class Person(name: String, age: Long)
// $example off:create_ds$
It is really hard to find out what this is for; I suspect it feeds some auto-documentation tool? The same convention appears in both the Java and Scala examples.
Answer 1
Score: 4
Spark uses a custom Jekyll plugin, include_example.rb, to generate their documentation. It lets them use an include_example tag in their Markdown sources to pull a file from the repository into the page.
The plugin contains this description:
> # Select lines according to labels in code. Currently we use "$example on$" and "$example off$"
> # as labels. Note that code blocks identified by the labels should not overlap.
Thus, these comments exist so that the documentation can be auto-generated directly from the example sources.
The file you have shown in the question is included in getting-started.md via {% include_example create_df scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}.
You can see how this looks fully rendered in Getting Started - Spark 3.0.0 Documentation.
As you can see, they use those tags to strip out irrelevant information and boilerplate for each language and show only the relevant snippet. Different labels let them select different parts of the same file.
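To make the mechanism concrete, here is a minimal sketch in Java of the kind of label-based selection the plugin performs. This is a hypothetical illustration, not the actual include_example.rb code: it keeps only the lines between the "on" and "off" markers for the requested label, and drops every marker line (including those for other, overlapping labels), which matches the rendered output you see in the docs.

```java
import java.util.ArrayList;
import java.util.List;

public class ExampleSelector {
    // Hypothetical sketch of label-based line selection (not the real Ruby plugin).
    public static String select(String source, String label) {
        String on  = "$example on:" + label + "$";
        String off = "$example off:" + label + "$";
        boolean keep = false;
        List<String> out = new ArrayList<>();
        for (String line : source.split("\n", -1)) {
            if (line.contains(on)) {
                keep = true;   // start of the labelled block
            } else if (line.contains(off)) {
                keep = false;  // end of the labelled block
            } else if (line.contains("$example on:") || line.contains("$example off:")) {
                // marker lines for other labels are dropped, never emitted
            } else if (keep) {
                out.add(line);
            }
        }
        return String.join("\n", out);
    }
}
```

Running this over the snippet from the question with the label "init_session" would keep only the import org.apache.spark.sql.SparkSession line, mirroring how the documentation shows each snippet without the marker comments.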