Spark SQL cannot use current date in subquery
I have a process which posts logs into a Spark database:
I'm trying to make the SQL more dynamic by supplying the current date in the query like so, but when I do it returns 0:
Running the subquery on its own works:
Can anyone help please?
Answer 1
Score: 0
The issue with the query is that you are adding single quotes around the pattern generated by the subquery. This means the LIKE operator will only match rows where the batch_insert_Date column starts with a literal single quote, followed by the date prefix generated by the subquery. To fix this, remove the single quotes from the pattern generated by the subquery. Here's how you can modify the query:

SELECT * FROM my_table
WHERE batch_insert_Date LIKE
(SELECT CONCAT(cast(date_format(current_date(), 'yyyyMMdd') as string), '%') AS date_col)

In this modified query, the CONCAT() function concatenates the date prefix generated by the subquery with the % wildcard character, which matches any number of characters. The LIKE operator then matches the batch_insert_Date column against the pattern generated by the subquery.

This query should return all rows where the batch_insert_Date column starts with the current date in yyyyMMdd format.
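The quoting mistake is easy to reproduce outside Spark. This is a minimal sketch using Python's built-in sqlite3 rather than Spark SQL (the table name, column name, and sample row are made up for illustration): the "broken" query builds a pattern that contains literal single quotes, so it matches nothing, while the "fixed" query concatenates the wildcard directly onto the date prefix, as in the answer above.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (batch_insert_Date TEXT, msg TEXT)")

# One row whose batch_insert_Date starts with today's date in yyyyMMdd form.
today = date.today().strftime("%Y%m%d")
conn.execute("INSERT INTO my_table VALUES (?, ?)", (today + "_001", "log line"))

# Broken version: the subquery wraps the pattern in literal single quotes
# ('20240101%' including the quote characters), so no real value can match.
broken = conn.execute(
    "SELECT COUNT(*) FROM my_table WHERE batch_insert_Date LIKE "
    "(SELECT '''' || strftime('%Y%m%d', 'now', 'localtime') || '%' || '''')"
).fetchone()[0]

# Fixed version: concatenate the % wildcard directly, no extra quotes.
fixed = conn.execute(
    "SELECT COUNT(*) FROM my_table WHERE batch_insert_Date LIKE "
    "(SELECT strftime('%Y%m%d', 'now', 'localtime') || '%')"
).fetchone()[0]

print(broken, fixed)  # the quoted pattern finds 0 rows, the fixed one finds 1
```

SQLite uses `||` for string concatenation where Spark SQL uses CONCAT(), but the LIKE semantics are the same: the pattern is compared character by character, so stray quote characters in the pattern are matched literally.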