Method fit of ALSModel not found


Question



I'm new to Spark. I want to read data from a database and recommend products to a specific user.
I found the following code:

  import java.util.Collections;

  import org.apache.spark.ml.recommendation.ALS;
  import org.apache.spark.ml.recommendation.ALSModel;
  import org.apache.spark.sql.DataFrameReader;
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Encoders;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class CollaborativeFiltering {
      public static void main(String[] args) {
          // Step 1: Set up the Spark environment
          SparkSession spark = SparkSession.builder()
                  .appName("CollaborativeFiltering")
                  .master("local[*]")
                  .getOrCreate();

          // Step 2: Configure the database connection and load the ratings data into a DataFrame
          String url = "jdbc:mysql://localhost:3306/your_database"; // Replace with your database URL
          String table = "ratings";          // Replace with your table name
          String user = "your_username";     // Replace with your database username
          String password = "your_password"; // Replace with your database password
          DataFrameReader reader = spark.read().format("jdbc");
          Dataset<Row> ratingsDF = reader.option("url", url)
                  .option("dbtable", table)
                  .option("user", user)
                  .option("password", password)
                  .load();

          // Step 3: Prepare the data for collaborative filtering
          Dataset<Row> preparedData = ratingsDF.withColumnRenamed("user_id", "userId")
                  .withColumnRenamed("product_id", "itemId");

          // Step 4: Build the collaborative filtering model
          ALS als = new ALS();
          // Required parameters
          als.setUserCol("userId");
          als.setItemCol("itemId");
          als.setRatingCol("rating");
          // Optional parameters
          als.setRank(10);       // number of latent factors
          als.setMaxIter(10);    // maximum number of iterations
          als.setRegParam(0.01); // regularization parameter
          ALSModel model = als.fit(preparedData);

          // Step 5: Generate recommendations for a specific user.
          // recommendForUserSubset expects a Dataset with a column named after
          // the user column ("userId"), so rename the default "value" column.
          int userId = 123; // Replace with the desired user ID
          Dataset<Row> users = spark
                  .createDataset(Collections.singletonList(userId), Encoders.INT())
                  .withColumnRenamed("value", "userId");
          Dataset<Row> userRecommendations = model.recommendForUserSubset(users, 5); // top 5

          // Print the recommendations
          userRecommendations.show(false);

          // Stop the Spark session
          spark.stop();
      }
  }

but the methods setMaxIter, setRegParam, and fit are not found.
Any help, please.
PS: I'm using Spark version 3.3.0 and Scala version 2.13; I've tried other versions, but the problem is always the same.

Answer 1

Score: 1


The problem was solved by changing the versions to the following:

  <scala.version>2.12</scala.version>
  <spark.version>3.2.0</spark.version>
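This works because Spark artifacts are cross-built per Scala version and carry the Scala version as a suffix in the artifact id, so the `_2.12`/`_2.13` suffix on every Spark dependency must match the Scala version the project is compiled against. A minimal `pom.xml` fragment sketching how the two properties above could feed the artifact ids (the exact dependency set and the MySQL driver version are assumptions, not from the original post):

```xml
<properties>
    <scala.version>2.12</scala.version>
    <spark.version>3.2.0</spark.version>
</properties>

<dependencies>
    <!-- Spark SQL and MLlib, built for the chosen Scala version -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- JDBC driver for the MySQL source used in the question -->
    <dependency>
        <groupId>com.mysql</groupId>
        <artifactId>mysql-connector-j</artifactId>
        <version>8.0.33</version>
    </dependency>
</dependencies>
```

If jars built for one Scala version end up on the classpath of a project compiled against another (e.g. `spark-mllib_2.13` mixed with `_2.12` artifacts), methods such as setMaxIter, setRegParam, and fit can fail to resolve even though the ALS class itself is found.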

huangapple
  • Published on 2023-05-24 17:47:40
  • Please retain this link when reposting: https://go.coder-hub.com/76322184.html