Method fit of ALSModel not found

Question

I'm new to Spark. I want to read data from a database and recommend products to a specific user.
I found the following code:

    import java.util.Collections;

    import org.apache.spark.ml.recommendation.ALS;
    import org.apache.spark.ml.recommendation.ALSModel;
    import org.apache.spark.sql.DataFrameReader;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class CollaborativeFiltering {
        public static void main(String[] args) {
            // Step 1: Set up the Spark environment
            SparkSession spark = SparkSession.builder()
                    .appName("CollaborativeFiltering")
                    .master("local[*]")
                    .getOrCreate();

            // Step 2: Configure the database connection and load the ratings data into a DataFrame
            String url = "jdbc:mysql://localhost:3306/your_database"; // Replace with your database URL
            String table = "ratings";          // Replace with your table name
            String user = "your_username";     // Replace with your database username
            String password = "your_password"; // Replace with your database password

            DataFrameReader reader = spark.read().format("jdbc");
            Dataset<Row> ratingsDF = reader.option("url", url)
                    .option("dbtable", table)
                    .option("user", user)
                    .option("password", password)
                    .load();

            // Step 3: Prepare the data for collaborative filtering
            Dataset<Row> preparedData = ratingsDF.withColumnRenamed("user_id", "userId")
                    .withColumnRenamed("product_id", "itemId");

            // Step 4: Build the collaborative filtering model
            ALS als = new ALS();

            // Set the required parameters
            als.setUserCol("userId");
            als.setItemCol("itemId");
            als.setRatingCol("rating");

            // Set additional optional parameters
            als.setRank(10);       // Number of latent factors
            als.setMaxIter(10);    // Maximum number of iterations
            als.setRegParam(0.01); // Regularization parameter

            ALSModel model = als.fit(preparedData);

            // Step 5: Generate recommendations for a specific user.
            // Note: recommendForUserSubset expects a column named after userCol,
            // so rename the default "value" column to "userId". Encoders.INT()
            // is a method call in Java.
            int userId = 123; // Replace with the desired user ID
            Dataset<Row> users = spark
                    .createDataset(Collections.singletonList(userId), Encoders.INT())
                    .toDF("userId");
            Dataset<Row> userRecommendations = model.recommendForUserSubset(users, 5); // Top 5 recommendations

            // Print the recommendations
            userRecommendations.show(false);

            // Stop the Spark session
            spark.stop();
        }
    }

but the methods setMaxIter, setRegParam and fit are not found.
Any help please.
PS: I'm using Spark version 3.3.0 and Scala version 2.13; I've tried other versions, but it's always the same problem.

Answer 1

Score: 1

The problem was solved by changing the versions to the following:

    <scala.version>2.12</scala.version>
    <spark.version>3.2.0</spark.version>
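
For reference, a minimal pom.xml matching these versions might look like the sketch below. The spark-mllib artifact is the one that provides ALS and ALSModel; the MySQL connector coordinates are an assumption for the JDBC source used in the question, so adjust them to your setup:

    <properties>
        <scala.version>2.12</scala.version>
        <spark.version>3.2.0</spark.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_${scala.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <!-- Assumed JDBC driver for the MySQL source; replace if you use a different database -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.33</version>
        </dependency>
    </dependencies>

The key point is that every Spark artifact must carry the same Scala suffix (here _2.12); mixing _2.12 and _2.13 artifacts on the classpath typically surfaces as exactly this kind of method-not-found compile error.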
