ShardingSphere Production Practice

Due to business growth, a few core tables in our database are expected to reach roughly 2 billion rows per table within the next six months. For external reasons, MySQL is the only storage option, so we decided to shard across databases and tables, and ultimately chose ShardingSphere.

For the selection process, see the earlier article: MySQL Large Data Volume Solutions.

Since data will not be distributed perfectly evenly after splitting, we sized for about 4 million rows per table initially, which works out to roughly 512 tables (2 billion / 4 million = 500, rounded up to a power of two): 8 databases with 64 tables each. The team uses a single sharding key; which column to shard by is up to the team, e.g. by region or by a business identifier.

The configured sharding strategy:

• Database: sharding_column % 8

• Table: sharding_column / 8 % 64
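As a sanity check, the two expressions above can be evaluated for a sample key. This is plain Java arithmetic mirroring the inline expressions; the key value 1000 is purely illustrative:

```java
// Mirrors the sharding expressions above: database = key % 8,
// table = key / 8 % 64 (integer division).
public class ShardRouting {
    static int dbIndex(long shardingColumn) {
        return (int) (shardingColumn % 8);       // 8 databases: ds_0..ds_7
    }

    static int tableIndex(long shardingColumn) {
        return (int) (shardingColumn / 8 % 64);  // 64 tables per database
    }

    public static void main(String[] args) {
        long recordId = 1000L;
        // 1000 % 8 = 0, 1000 / 8 = 125, 125 % 64 = 61
        System.out.println("ds_" + dbIndex(recordId) + ".t_record_" + tableIndex(recordId));
    }
}
```

Note that eight consecutive key values land in the same table suffix across the eight databases, so a range of keys spreads over databases first, then tables.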

We tested two versions locally; production runs 5.2.0.

ShardingSphere 5.0.0-alpha

The 5.0.0-alpha and 5.2.0 configuration files differ. In 5.0.0-alpha, a common entry under datasource can hold configuration shared by all data sources; in 5.2.0 those shared properties must be repeated under every single data source.

Maven dependencies:

<shardingsphere-starter.version>5.0.0-alpha</shardingsphere-starter.version>

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
    <version>${shardingsphere-starter.version}</version>
</dependency>

<!-- druid -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.22</version>
</dependency>

application.yml (note: the names list must match the ds_0..ds_7 prefixes referenced by actualDataNodes):

spring:
  shardingsphere:
    datasource:
      common:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        initial-size: 6
        maxActive: 20
        # Maximum wait time when acquiring a connection
        maxWait: 60000
        # Interval between checks for idle connections to close, in milliseconds
        timeBetweenEvictionRunsMillis: 60000
        # Minimum time a connection stays idle in the pool, in milliseconds
        minEvictableIdleTimeMillis: 300000
        # Uncomment for Oracle
        #validationQuery: SELECT 1 FROM DUAL
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        # Enable PSCache and set its size per connection
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        # Filters for monitoring statistics; without 'stat' the console cannot aggregate SQL, 'wall' is the SQL firewall
        filters: stat,wall,slf4j
        # Enable mergeSql and slow-SQL logging via connectProperties
        connectionProperties: druid.stat.mergeSql\=true;druid.stat.slowSqlMillis\=5000
        wall:
          multi-statement-allow: true
      names: ds_0, ds_1, ds_2, ds_3, ds_4, ds_5, ds_6, ds_7
      ds_0:
        type: com.alibaba.druid.pool.DruidDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1/test?autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai&characterEncoding=utf8
        username: root
        password: root
      ds_1:
        ......
      # ds_2 through ds_7 omitted
    rules:
      sharding:
        tables:
          # Logical table name
          t_record:
            actualDataNodes: ds_$->{0..7}.t_record_$->{0..63}
            # Table sharding strategy
            tableStrategy:
              standard:
                shardingColumn: record_id
                shardingAlgorithmName: t-record-inline
            keyGenerateStrategy:
              column: id
              keyGeneratorName: snowflake
        defaultShardingColumn: record_id
        # Binding tables
        bindingTables:
          - t_record
        defaultTableStrategy:
          none:
        defaultDatabaseStrategy:
          standard:
            shardingColumn: record_id
            shardingAlgorithmName: database-inline
        # Sharding algorithms
        sharding-algorithms:
          t-record-inline:
            type: INLINE
            props:
              algorithm-expression: t_record_$->{record_id.intdiv(8) % 64}
          database-inline:
            type: INLINE
            props:
              algorithm-expression: ds_$->{record_id % 8}
        default-key-generate-strategy:
          column: id
          key-generator-name: snowflake
        key-generators:
          snowflake:
            type: SNOWFLAKE
            props:
              worker-id: 123
    props:
      sql-show: true

Druid monitoring configuration:

@Configuration
public class DruidConfig {
    @Bean
    public ServletRegistrationBean statViewServlet() {
        ServletRegistrationBean bean = new ServletRegistrationBean(new StatViewServlet(), "/druid/*");
        Map<String, String> initParams = new HashMap<>();
        initParams.put("loginUsername", "root");
        initParams.put("loginPassword", "root");
        // Whitelisted IPs; by default (when empty) all access is allowed
        initParams.put("allow", "127.0.0.1");
        // Blacklisted IPs
        initParams.put("deny", "192.168.1.1");
        bean.setInitParameters(initParams);
        return bean;
    }

    @Bean
    public FilterRegistrationBean webStatFilter() {
        WebStatFilter webStatFilter = new WebStatFilter();
        FilterRegistrationBean<WebStatFilter> filterRegistrationBean = new FilterRegistrationBean<>(webStatFilter);
        filterRegistrationBean.setUrlPatterns(Arrays.asList("/*"));
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }
}

ShardingSphere 5.2.0 production configuration

Maven dependencies:

<shardingsphere-starter.version>5.2.0</shardingsphere-starter.version>

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
    <version>${shardingsphere-starter.version}</version>
</dependency>

<!-- druid -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.18</version>
</dependency>

application.yml:

spring:
  shardingsphere:
    datasource:
      names: ds_0, ds_1, ds_2, ds_3, ds_4, ds_5, ds_6, ds_7
      ds_0:
        type: com.alibaba.druid.pool.DruidDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1/prod?autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai&characterEncoding=utf8
        username: root
        password: root
        maxWait: 60000
        maxActive: 100
        validationQuery: SELECT 1
        testWhileIdle: true
        testOnBorrow: true
        timeBetweenEvictionRunsMillis: 300000
        minEvictableIdleTimeMillis: 3600000
        useUnfairLock: true
      ds_1:
        ......
      # ds_2 through ds_7 omitted
    rules:
      sharding:
        tables:
          # Logical table name
          t_record:
            actualDataNodes: ds_${0..7}.t_record_${0..63}
            # Table sharding strategy
            tableStrategy:
              standard:
                shardingColumn: record_id
                shardingAlgorithmName: t-record-inline
            keyGenerateStrategy:
              column: id
              keyGeneratorName: UUID
        defaultShardingColumn: record_id
        # Binding tables
        bindingTables:
          - t_record
        defaultTableStrategy:
          none:
        defaultDatabaseStrategy:
          standard:
            shardingColumn: record_id
            shardingAlgorithmName: database-inline
        # Sharding algorithms
        sharding-algorithms:
          t-record-inline:
            type: INLINE
            props:
              algorithm-expression: t_record_${record_id.intdiv(8) % 64}
          database-inline:
            type: INLINE
            props:
              algorithm-expression: ds_${record_id % 8}
        default-key-generate-strategy:
          column: id
          key-generator-name: UUID
        key-generators:
          UUID:
            type: UUID
    props:
      sql-show: false

Groovy division yields a decimal

record_id / 8 can produce a fractional result such as 0.5, which yields the table name t_record_0.5 and causes an error.

The reason is that Groovy provides no dedicated integer-division operator symbol (its / on two integers performs decimal division), so the expression record_id / 8 must be changed to record_id.intdiv(8).
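The behavior can be reproduced with equivalent Java arithmetic. Groovy's `/` on two integers behaves like the decimal division shown here, while `intdiv()` behaves like Java's integer `/`:

```java
// Groovy: 4 / 8 evaluates to 0.5 (decimal division), so the table suffix
// becomes "0.5"; 4.intdiv(8) evaluates to 0. Java equivalents below.
public class DivisionDemo {
    public static void main(String[] args) {
        long recordId = 4;
        double slashResult = recordId / 8.0;  // what Groovy's record_id / 8 yields
        long intdivResult = recordId / 8;     // what record_id.intdiv(8) yields
        System.out.println("t_record_" + slashResult);  // t_record_0.5 -> invalid table name
        System.out.println("t_record_" + intdivResult); // t_record_0 -> valid
    }
}
```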

Adding the Druid monitoring filter to application.yml causes a startup error:

spring:
  shardingsphere:
    datasource:
      names: ds_0, ds_1, ds_2, ds_3, ds_4, ds_5, ds_6, ds_7
      ds_0:
        filters: stat

See the GitHub issue: https://github.com/apache/shardingsphere/issues/21211

Error:

Caused by: org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:com.alibaba.druid.filter.stat.StatFilter; exception=Class is not accepted: com.alibaba.druid.filter.stat.StatFilter
 in 'string', line 76, column 5:
    - !!com.alibaba.druid.filter.stat. ...

The 5.2.0 audit auto-configuration does not take effect. Audit is configured to intercept SQL that does not include the sharding key, avoiding full routing (queries fanned out to every shard).

Investigation showed that the ShardingSphereAutoConfiguration auto-configuration does not register the audit implementation classes. The workaround is to register them manually by configuring a BeanPostProcessor.

For a concrete example, see the earlier article: Strategy Pattern and Template Pattern in Practice.

@Override
public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
    // Load the audit configuration manually: the ShardingRuleSpringBootConfiguration
    // auto-configuration does not load it
    if (bean instanceof AlgorithmProvidedShardingRuleConfiguration) {
        PropertySource source = ((ConfigurableEnvironment) environment).getPropertySources().get("applicationConfig: [classpath:/application.yml]");
        Object val = source.getProperty("spring.shardingsphere.rules.sharding.auditors.sharding_key_required_auditor.type");
        // Simple implementation
        if (val != null) {
            Map<String, ShardingAuditAlgorithm> auditors = new LinkedHashMap<>();
            auditors.put("sharding-key-required-auditor", new DMLShardingConditionsShardingAuditAlgorithm());
            ((AlgorithmProvidedShardingRuleConfiguration) bean).setAuditors(auditors);
        }
    }
    return bean;
}
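For reference, the audit configuration that the post-processor checks for could look like the following sketch. The key names follow ShardingSphere 5.2.x YAML conventions, and the defaultAuditStrategy part is an assumption added for completeness; verify both against your version's documentation:

```yaml
spring:
  shardingsphere:
    rules:
      sharding:
        # Auditor that rejects DML lacking a sharding condition (full-route SQL)
        auditors:
          sharding_key_required_auditor:
            type: DML_SHARDING_CONDITIONS
        defaultAuditStrategy:
          auditorNames:
            - sharding_key_required_auditor
          allowHintDisable: true
```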

Issues found while testing 5.2.0

• In 5.2.0, the ShardingSphere key generation strategy does not take effect and conflicts with the MyBatis-Plus key generation strategy; we suspect this is a 5.2.0 issue. The code therefore generates primary keys with MyBatis-Plus's built-in snowflake algorithm.


• Apart from the sharded tables, the data sources ShardingSphere connects to must not contain tables with the same name. On startup, ShardingSphere stores the data-source-to-table mapping in a Map; a duplicated name maps one table to multiple data sources, and joins on it then fail (both tables must resolve to the same data source).


• A sharded table can be joined with an ordinary table, but both must resolve to the same data source.


• ShardingSphere supports cross-database transactional updates; if application logic throws an exception, the changes are rolled back.
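The duplicate-table point can be illustrated with a toy sketch. This is not ShardingSphere's internal code, only an illustration of why a table name registered in two data sources cannot be routed unambiguously:

```java
import java.util.*;

// Toy illustration (not ShardingSphere internals): a non-sharded table that
// exists in two data sources makes routing ambiguous.
public class TableRegistry {
    private final Map<String, List<String>> tableToDataSources = new HashMap<>();

    void register(String dataSource, String table) {
        tableToDataSources.computeIfAbsent(table, k -> new ArrayList<>()).add(dataSource);
    }

    String routeFor(String table) {
        List<String> sources = tableToDataSources.getOrDefault(table, Collections.emptyList());
        if (sources.size() != 1) {
            throw new IllegalStateException("table '" + table + "' maps to " + sources);
        }
        return sources.get(0);
    }

    public static void main(String[] args) {
        TableRegistry registry = new TableRegistry();
        registry.register("ds_0", "t_config");
        registry.register("ds_1", "t_config"); // same non-sharded table in two data sources
        try {
            registry.routeFor("t_config");
        } catch (IllegalStateException e) {
            System.out.println("cannot route: " + e.getMessage());
        }
    }
}
```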

Original article: https://blog.csdn.net/u011385940/article/details/127972468