Keycloak Token Sharing in Cluster Not Working

# Question

I have two Keycloak instances running on two different host machines (each Keycloak instance runs in a Docker container). I followed the guide from this [repo](https://github.com/ivangfr/keycloak-clustered). I use JDBC_PING as the discovery protocol, and I can see from the logs that the two Keycloak instances have discovered each other.

```
2023-07-26 19:39:44,839 INFO  [org.infinispan.CLUSTER] (jgroups-7,vm200-9849) [Context=sessions] ISPN100010: Finished rebalance with members [vm200-9849, vm202-18591], topology id 5
2023-07-26 19:39:44,849 INFO  [org.infinispan.CLUSTER] (jgroups-7,vm200-9849) [Context=work] ISPN100002: Starting rebalance with members [vm200-9849, vm202-18591], phase READ_OLD_WRITE_ALL, topology id 2
2023-07-26 19:39:44,850 INFO  [org.infinispan.LIFECYCLE] (jgroups-7,vm200-9849) [Context=work] ISPN100002: Starting rebalance with members [vm200-9849, vm202-18591], phase READ_OLD_WRITE_ALL, topology id 2
2023-07-26 19:39:44,851 INFO  [org.infinispan.LIFECYCLE] (non-blocking-thread--p2-t6) [Context=work] ISPN100010: Finished rebalance with members [vm200-9849, vm202-18591], topology id 2
2023-07-26 19:39:44,872 INFO  [org.infinispan.CLUSTER] (jgroups-9,vm200-9849) [Context=work] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-07-26 19:39:44,875 INFO  [org.infinispan.CLUSTER] (jgroups-9,vm200-9849) [Context=work] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-07-26 19:39:44,878 INFO  [org.infinispan.CLUSTER] (jgroups-9,vm200-9849) [Context=work] ISPN100010: Finished rebalance with members [vm200-9849, vm202-18591], topology id 5
```

When I log in to the two Keycloak instances using different browsers (each browser using a different URL), I can see that there are two admin sessions. Any users/clients I create on one instance are reflected on the other.

However, a REST API auth token generated by one instance does not work against the other. I was under the assumption that in a clustered setup the different Keycloak instances should be able to work with the same auth token, as I could not find any specific articles or documentation relating to this.
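To make the failing flow concrete, here is a minimal sketch of it using only Python's standard library. The hostnames, realm, client, and credentials are placeholders, not values from the actual setup:

```python
from urllib.parse import urlencode
import urllib.request

# Hypothetical URLs for the two cluster nodes; adjust realm/client to your setup.
NODE_A = "http://host-a:8080"
NODE_B = "http://host-b:8080"
REALM = "myrealm"

def token_request(base_url: str, client_id: str, username: str, password: str):
    """Build a password-grant request against Keycloak's token endpoint."""
    url = f"{base_url}/realms/{REALM}/protocol/openid-connect/token"
    body = urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")

# Step 1: obtain an access token from node A ...
req = token_request(NODE_A, "admin-cli", "admin", "secret")

# Step 2: ... then present it to node B, e.g. on the userinfo endpoint,
# which is where the 401 appears in this setup:
# urllib.request.urlopen(urllib.request.Request(
#     f"{NODE_B}/realms/{REALM}/protocol/openid-connect/userinfo",
#     headers={"Authorization": f"Bearer {access_token}"}))
```

The endpoint paths are the standard OpenID Connect endpoints of a current (Quarkus-based) Keycloak; older WildFly distributions prefix them with `/auth`.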

Here is my infinispan.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>

<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
        xmlns="urn:infinispan:config:11.0">

    <jgroups>
        <stack name="postgres-jdbc-ping-tcp" extends="tcp">
            <!-- <TCP external_addr="${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}" />
            <TCP bind_port="7600" /> -->
            <JDBC_PING
                connection_driver="org.postgresql.Driver"
                connection_username="${env.KC_DB_USERNAME}"
                connection_password="${env.KC_DB_PASSWORD}"
                connection_url="jdbc:postgresql://${env.KC_DB_URL_HOST}:${env.KC_DB_URL_PORT:5432}/${env.KC_DB_URL_DATABASE}${env.KC_DB_URL_PROPERTIES:}"
                initialize_sql="CREATE SCHEMA IF NOT EXISTS ${env.KC_DB_SCHEMA:public}; CREATE TABLE IF NOT EXISTS ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, updated timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
                insert_single_sql="INSERT INTO ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, '${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}', NOW(), ?);"
                delete_single_sql="DELETE FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE own_addr=? AND cluster_name=?;"
                select_all_pingdata_sql="SELECT ping_data, own_addr, cluster_name FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE cluster_name=?"
                info_writer_sleep_time="500"
                remove_all_data_on_view_change="true"
                stack.combine="REPLACE"
                stack.position="MPING"
            />
        </stack>
    </jgroups>
    <cache-container name="keycloak">
        <!-- <transport lock-timeout="60000"/> -->
        <transport lock-timeout="60000" stack="postgres-jdbc-ping-tcp"/>
        <local-cache name="realms">
            <encoding>
                <key media-type="application/x-java-object"/>
                <value media-type="application/x-java-object"/>
            </encoding>
            <memory max-count="10000"/>
        </local-cache>
        <local-cache name="users">
            <encoding>
                <key media-type="application/x-java-object"/>
                <value media-type="application/x-java-object"/>
            </encoding>
            <memory max-count="10000"/>
        </local-cache>
        <distributed-cache name="sessions" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <distributed-cache name="authenticationSessions" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <distributed-cache name="offlineSessions" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <distributed-cache name="clientSessions" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <distributed-cache name="offlineClientSessions" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <distributed-cache name="loginFailures" owners="2">
            <expiration lifespan="-1"/>
        </distributed-cache>
        <local-cache name="authorization">
            <encoding>
                <key media-type="application/x-java-object"/>
                <value media-type="application/x-java-object"/>
            </encoding>
            <memory max-count="10000"/>
        </local-cache>
        <replicated-cache name="work">
            <expiration lifespan="-1"/>
        </replicated-cache>
        <local-cache name="keys">
            <encoding>
                <key media-type="application/x-java-object"/>
                <value media-type="application/x-java-object"/>
            </encoding>
            <expiration max-idle="3600000"/>
            <memory max-count="1000"/>
        </local-cache>
        <distributed-cache name="actionTokens" owners="2">
            <encoding>
                <key media-type="application/x-java-object"/>
                <value media-type="application/x-java-object"/>
            </encoding>
            <expiration max-idle="-1" lifespan="-1" interval="300000"/>
            <memory max-count="-1"/>
        </distributed-cache>
    </cache-container>

</infinispan>
```

I tried changing the number of owners in infinispan.xml, but that did not work.
# Answer 1

**Score**: 0

For anyone else struggling with this: the issue was that I had the Keycloak instances running on different ports.
For an auth token to be validated, the issuing Keycloak instance must have the same hostname and the same port (the same URL). [See the docs][1]
Once I set up a load balancer in front of the Keycloak instances and configured it as a proxy, with both of my Keycloak instances pointing to it, I was able to use their auth tokens interchangeably.

[1]: https://www.keycloak.org/server/hostname#:~:text=When%20deploying%20to%20production%20you%20usually%20want%20a%20consistent%20URL%20for%20the%20frontend%20endpoints%20and%20the%20token%20issuer%20regardless%20of%20how%20the%20request%20is%20constructed
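This behavior matches issuer validation: an access token embeds the URL it was issued from in its `iss` claim, and a node whose effective frontend URL differs will not accept it. A quick way to confirm is to decode that claim from tokens obtained from each node. The sketch below uses hypothetical, hand-built tokens and does no signature verification:

```python
import base64
import json

def issuer(token: str) -> str:
    """Return the 'iss' claim of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))["iss"]

def b64url(obj: dict) -> str:
    """Encode a dict as unpadded base64url JSON (for the demo tokens below)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Two demo tokens for the same realm, issued through different node URLs.
header = b64url({"alg": "RS256", "typ": "JWT"})
token_a = f'{header}.{b64url({"iss": "http://host-a:8080/realms/myrealm"})}.sig'
token_b = f'{header}.{b64url({"iss": "http://host-b:8180/realms/myrealm"})}.sig'

# The issuers differ, so each node rejects the other's token.
print(issuer(token_a))  # http://host-a:8080/realms/myrealm
print(issuer(token_b))  # http://host-b:8180/realms/myrealm
```

With a load balancer as the single frontend URL, both nodes issue and validate tokens with the same `iss`, which is why the fix above works.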

huangapple
  • Published 2023-07-27 21:34:57
  • When reposting, please retain this link: https://go.coder-hub.com/76780310.html