Microservices centralized database model

Question

Currently we have several microservices, each with its own database model and migrations provided by the GORM Golang package. We have one big, old MySQL database, which goes against microservice principles, but we can't replace it. I'm afraid that as the number of microservices grows, we will get lost among the many database models. When I add a new column in a microservice, I just type service migrate in the terminal (there is a CLI for the run and migrate commands) and it refreshes the database.

What is the best practice for managing this? For example, with 1000 microservices, no one is going to type service migrate whenever someone changes the models. I'm thinking about a centralized database service where we just add a new column and which stores all the models and migrations. The only question is how the services would find out about database model changes. This is how we store, for example, a user in a service:

// User maps the service's user model onto the shared MySQL table;
// the gorm/sql struct tags describe the columns GORM should manage.
type User struct {
	ID        uint           `gorm:"column:id;not null" sql:"AUTO_INCREMENT"`
	Name      string         `gorm:"column:name;not null" sql:"type:varchar(100)"`
	Username  sql.NullString `gorm:"column:username;not null" sql:"type:varchar(255)"`
}

// TableName pins the model to the existing users table.
func (u *User) TableName() string {
	return "users"
}
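
For reference, a service migrate command like the one described above typically boils down to a GORM AutoMigrate call. Below is a minimal sketch, assuming GORM v1 (github.com/jinzhu/gorm) to match the struct tags above; the DSN and CLI wiring are placeholders, not the actual service code:

package main

import (
	"log"
	"os"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql" // registers the MySQL dialect
)

func main() {
	if len(os.Args) < 2 || os.Args[1] != "migrate" {
		log.Fatal("usage: service migrate")
	}

	// Placeholder DSN; in practice it comes from config or the environment.
	db, err := gorm.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer db.Close()

	// AutoMigrate creates missing tables, columns and indexes for the
	// registered models (User is the model defined above, same package);
	// it does not drop or rewrite existing columns.
	if err := db.AutoMigrate(&User{}).Error; err != nil {
		log.Fatalf("migrate: %v", err)
	}
	log.Println("schema is up to date")
}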

Answer 1

Score: 4

Depending on your use cases, MySQL Cluster (https://dev.mysql.com/downloads/cluster/) might be an option. The two-phase commits (https://forums.mysql.com/read.php?25,248914) used by MySQL Cluster make frequent writes impractical, but if write performance isn't a big issue, I would expect MySQL Cluster to work out better than connection pooling or queuing hacks. Certainly worth considering.

Answer 2

Score: 3

If I'm understanding your question correctly, you're trying to keep using a single MySQL instance behind many microservices.

There are a couple of ways to make an SQL system work:

  1. You could create a data-access microservice that handles inserts into and reads from the database and takes advantage of connection pooling, and have the rest of your services do all their reads and writes through it (see the pooling sketch after this list). This will definitely add some extra latency to every read and write and will likely be problematic at scale.

  2. You could look for a multi-master SQL solution (e.g. CitusDB) that scales easily; you can keep a central schema for the database and just make sure to handle the edge cases of data insertion (de-duping, etc.).

  3. You can use data-streaming architectures like Kafka or AWS Kinesis to transfer your data to your microservices and make sure they only deal with data through these streams. This way, you can de-couple your database from your data.
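
Going back to option 1, the connection-pooling side is mostly a matter of tuning database/sql's built-in pool inside the shared data-access service. A minimal sketch, with a made-up package name, DSN and limits:

package dataaccess // hypothetical shared data-access service

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL driver for database/sql
)

// openPool returns a *sql.DB whose pool limits keep a fleet of services
// from exhausting MySQL's connection budget.
func openPool() *sql.DB {
	// Placeholder DSN; real credentials would come from configuration.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app?parseTime=true")
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(5 * time.Minute)
	return db
}

The per-service limits would need to be sized against MySQL's max_connections across everything sharing the database.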

In my opinion, the best way to approach it is #3. This way, you won't have to think about your storage at the computation layer of your microservice architecture. A rough sketch of what such a stream-consuming service might look like follows.
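
The sketch below shows one way a service could "only deal with data through these streams" in Go, using the segmentio/kafka-go client; the broker address, topic and group names are made up:

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// The service never talks to MySQL directly; it only consumes its stream
	// and applies the events to its own local state or projection.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"kafka:9092"}, // placeholder broker
		Topic:   "users",                // placeholder topic
		GroupID: "user-service",         // one consumer group per microservice
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		log.Printf("user event key=%s value=%s", msg.Key, msg.Value)
	}
}

Writes would then go through a producer to the same topic rather than through direct SQL statements, which is what keeps the database decoupled from the services.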

Not sure what platform you're using for your microservices, but StdLib enforces a few conventions (e.g. transferring data only over HTTP) that help folks wrap their heads around it all. AWS Lambda also works very well with Kinesis as a source to trigger the function, which could help with the #3 approach.

Disclaimer: I'm the founder of StdLib.

Answer 3

Score: 2

If I understand your question correctly, there are multiple ways to achieve this.

One solution is to keep a schema version somewhere in the database that your microservices check periodically. When the database schema changes, you increase the schema version. If a service then notices that the database's schema version is higher than the version the service itself knows about, it can migrate the schema in code, which gorm allows; a rough sketch of this follows.
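
A minimal sketch of that idea with GORM v1, assuming a hypothetical single-row schema_versions table and a hard-coded version constant (both made up for illustration):

package service

import (
	"log"
	"time"

	"github.com/jinzhu/gorm"
)

// SchemaVersion mirrors a hypothetical single-row table holding the current
// database schema version.
type SchemaVersion struct {
	ID      uint `gorm:"column:id"`
	Version int  `gorm:"column:version"`
}

// serviceSchemaVersion is the schema version this build of the service expects.
const serviceSchemaVersion = 3

// watchSchema polls the version table and re-runs the in-code migration when
// the database reports a newer schema than this service knows about.
func watchSchema(db *gorm.DB) {
	for range time.Tick(time.Minute) {
		var current SchemaVersion
		if err := db.First(&current).Error; err != nil {
			log.Printf("schema check: %v", err)
			continue
		}
		if current.Version > serviceSchemaVersion {
			// User is the model from the question above (same package).
			if err := db.AutoMigrate(&User{}).Error; err != nil {
				log.Printf("migrate: %v", err)
			}
		}
	}
}

The same AutoMigrate call could also simply run unconditionally at service start-up, which fits the orchestration-driven approach described next.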

Other options depend on how you run your microservices. For example, if you run them on an orchestration platform (e.g. Kubernetes), you could put the migration code somewhere that runs when your service initializes. Then, once you update the schema, you can force a rolling refresh of your containers, which would in turn trigger the migration.
