• ETL Management Tool (shell version)


    ETL Management Tool Design Document

    Background

    There are many ETL jobs, the scripts are scattered and program versions have multiplied, which makes everything hard to manage. Switching databases involves many manual steps and is error-prone. What has been missing is a **unified tool for managing these programs**. Several switchovers ran into the problems listed below; after thinking them through, this script was born. In principle the tool can manage any program that is started from a script.

    Current problems

    1. Some jobs are not covered by any script, so they get missed during start/stop.
    2. Some process checks have gone stale and no longer match the right process, e.g. ps -ef|grep LoadEdcData.jar while the new program is named LoadEdcData_OC.jar.
    3. Older package versions are missing variables that were later added to the properties, e.g. HeartBeat and LoadMachiePauseHis.
    4. Database IP addresses are hard-coded in the code, e.g. the queryRunner in QMS.
    5. A switchover requires building two packages, which can get out of sync (the Report project).
    6. Job names are not self-explanatory, e.g. jobs distinguished only by a number for the database they connect to (job_1, job_2, ...) should be renamed to job_eda, job_qms.
    7. How should jobs that connect to both sides be handled? Stop the program and set valid_flg.
    8. Script logs should be leveled so that repeated manual confirmation is no longer needed.
    9. How can a new job be added with minimal configuration? Adding a job should only require filling in the job name, port, ...
    10. Should modifying etl-conf-d require interaction?
    11. The script must print a usage message.
    12. Should a job be restarted when its process disappears? (Build an auto-restart script for QMS and RPT; with K8s this is unnecessary, a Deployment already handles it. See the watchdog sketch after this list.)
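
    Item 12 mentions an auto-restart script; a minimal watchdog along those lines could look like the sketch below (the job name, start script and log path are placeholders, not part of the tool):

    # run from cron (e.g. every minute): restart the job if its process has disappeared
    jobName=LoadEdcData_OC
    startScript=/home/scripts/etl/Apshell/start-LoadEdcData_OC.sh
    if ! ps -ef | grep -E "\b${jobName}\b" | grep java | grep -v grep >/dev/null; then
        echo "$(date '+%F %T') ${jobName} is down, restarting" >> /home/scripts/etl/restart.log
        bash "${startScript}"
    fi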

    Program distribution on the servers

    Server 46 ETL
    java -Xms3072m -Xmx12288m -jar -Dspring.profiles.active=edanew-eda -Dlog.home=GLASSHSTNEW_AR EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar --server.port=8131 --project_name=EDA_ETL_PROD --job_group=EDA_ETL_HIS --job_name=LoadGlassHst_AR
    
    java -Xms3072m -Xmx12288m -jar -Dspring.profiles.active=edanew-eda -Dlog.home=GLASSHSTNEW_CF EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar --server.port=8088 --project_name=EDA_ETL_PROD --job_group=EDA_ETL_HIS --job_name=LoadGlassHst_CF
    
    
    java -Xms3072m -Xmx12288m -jar -Dspring.profiles.active=edanew-eda -Dlog.home=GLASSHSTNEW_OC_CELL1 EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar --server.port=8057 --project_name=EDA_ETL_PROD --job_group=EDA_ETL_HIS --job_name=LoadGlassHst_OC_CELL1
    
    
    java -Xms3072m -Xmx12288m -jar -Dspring.profiles.active=edanew-eda -Dlog.home=GLASSHSTNEW_OC_CELL2 EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar --server.port=8056 --project_name=EDA_ETL_PROD --job_group=EDA_ETL_HIS --job_name=LoadGlassHst_OC_CELL2
    
    
    java -Xms512m -Xmx2048m -jar -Dspring.profiles.active=edanew2-eda -Dlog.home=CHAMBERHST2GP EDA_ETL-0.0.51-SNAPSHOT_CHAMBERHST2GP.jar --server.port=8079 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadChamberData2GP
    
    java -Xms3072m -Xmx12288m -jar -Dspring.profiles.active=edanew2-eda -Dlog.home=EQPTEVENT2GP EDA_ETL-0.0.51-SNAPSHOT_EQEVENT2GP.jar --server.port=8029 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadEqEventData2GP
    
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edanew2-eda -Dlog.home=GP6MDS_ACF EDA_ETL-0.0.51-SNAPSHOT_MDS_PDS.jar --server.port=8131 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadEdcDataGP6MdsACF
    
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edanew2-eda -Dlog.home=GP6PDS EDA_ETL-0.0.51-SNAPSHOT_MDS_PDS.jar --server.port=8133 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadEdcDataGP6PdsACF
    
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edanew3-eda -Dlog.home=LoadDefectData2MES /opt/servers/etl/LoadDefectData2MES.jar --server.port=8993 --project_name=LoadDefectData2MES --job_group=LoadDefectData2MES --job_name=LoadDefectData2MES
    
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edanew3 -Dlog.home=HeartBeat /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT-20220720.jar --server.port=8992 --project_name=Monitor_MQ --job_group=Monitor_DATA --job_name=HeartBeat
    
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edanew3-eda -Dlog.home=LoadMachiePauseHis /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT-20220720.jar --server.port=8229 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadMachiePauseHis
    
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar /home/scripts/etl/ETL-LoadGlassHst_AR.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar /home/scripts/etl/ETL-LoadGlassHst_CF.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar /home/scripts/etl/ETL-LoadGlassHst_OC_CELL1.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadGlassHst.jar /home/scripts/etl/ETL-LoadGlassHst_OC_CELL2.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_CHAMBERHST2GP.jar /home/scripts/etl/ETL-LoadChamberData2GP.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_EQEVENT2GP.jar /home/scripts/etl/ETL-LoadEqEventData2GP.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_MDS_PDS.jar /home/scripts/etl/ETL-LoadEdcDataGP6MdsACF.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_MDS_PDS.jar /home/scripts/etl/ETL-LoadEdcDataGP6PdsACF.jar
    ln -nfs /opt/servers/etl/LoadDefectData2MES.jar /home/scripts/etl/ETL-LoadDefectData2MES.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT-20220720.jar /home/scripts/etl/ETL-HeartBeat.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT-20220720.jar /home/scripts/etl/ETL-LoadMachiePauseHis.jar
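
    These symlinks give every jar a job-specific name (ETL-<job_name>.jar) so that process filters can match on the job name even when several jobs share one jar. A quick way to confirm each link resolves to the intended jar (an illustrative check, not part of the tool):

    # list every managed symlink together with its resolved target
    for link in /home/scripts/etl/ETL-*.jar; do
        printf '%-55s -> %s\n' "${link}" "$(readlink -f "${link}")"
    done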
    
    Server 43 ETL
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edc -Dlog.home=LoadEdcData_OC /opt/servers/etl/LoadEdcData_OC.jar --server.port=8099 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_OC
    ln -nfs /opt/servers/etl/LoadEdcData_OC.jar /home/scripts/etl/ETL-LoadEdcData_OC.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=etl-codis-eda -Dlog.home=loadRedis /opt/servers/etl/EDA_ETL-LoadProdhisToRedis.jar --server.port=8045 --project_name=REDIS_ETL_PROD --job_group=ADD_DATA_ETL --job_name=LoadProductHisToRedis
    ln -nfs /opt/servers/etl/EDA_ETL-LoadProdhisToRedis.jar /home/scripts/etl/ETL-LoadProductHisToRedis.jar
    
    java -Xms512m -Xmx2048m -jar -Dspring.profiles.active=etl-eda -Dlog.home=DELETEHISDATA EDA_ETL-0.0.51-SNAPSHOT_DELETE_HISDATA.jar --server.port=8026 --project_name=DEL_ETL_PROD --job_group=DEL_ETL --job_name=DeleteHisData
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_DELETE_HISDATA.jar /home/scripts/etl/ETL-DeleteHisData.jar
    
    java -Xms1024m -Xmx2048m -jar -Dspring.profiles.active=ct1-eda -Dlog.home=deleteTempAlarm /opt/servers/etl/EDA_ETL-deleteTempAlarm.jar --server.port=8047 --project_name=DELETE_TEMP_ALARM --job_group=DELETE_TEMP_ALARM --job_name=DELETE_TEMP_ALARM
    ln -nfs /opt/servers/etl/EDA_ETL-deleteTempAlarm.jar /home/scripts/etl/ETL-DELETE_TEMP_ALARM.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc-eda -Dlog.home=OC /opt/servers/etl/LoadEdcData.jar --server.port=8099 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_OC
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_OC.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc-eda -Dlog.home=CF /opt/servers/etl/LoadEdcData.jar --server.port=8098 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_CF
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_CF.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc-eda -Dlog.home=CF2 /opt/servers/etl/LoadEdcData.jar --server.port=8028 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_CF2
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_CF2.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc-eda -Dlog.home=ARRAY /opt/servers/etl/LoadEdcData.jar --server.port=8097 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_ARRAY
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc-eda -Dlog.home=ARRAY2 /opt/servers/etl/LoadEdcData.jar --server.port=8025 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_ARRAY2
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY2.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=edc -Dlog.home=ARRAY /opt/servers/etl/LoadEdcData_byEQPT.jar --server.port=8029 --project_name=REPORT_ETL_PROD --job_group=EDC_ETL_HIS --job_name=LoadEdcData_ARRAY4
    ln -nfs /opt/servers/etl/LoadEdcData_byEQPT.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY4.jar
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=base-eda -Dlog.home=LoadBasic /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_loadBasic.jar --server.port=8056 --project_name=LOAD_BASIC --job_group=LOAD_BASIC --job_name=LoadBasicData
    ln -nfs /opt/servers/etl/LoadBasicData.jar /home/scripts/etl/ETL-LoadBasicData.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=ct1-eda -Dlog.home=CT1 /opt/servers/etl/EDA_ETL-20191101.jar --server.port=8043 --project_name=CT1_ETL --job_group=CT1_ETL --job_name=LoadCt1Data
    ln -nfs /opt/servers/etl/EDA_ETL-20191101.jar /home/scripts/etl/ETL-LoadCt1Data.jar
    
    java -Xms1024m -Xmx4096m -jar -Dspring.profiles.active=etl-eda -Dlog.home=LoadHmsData /opt/servers/etl/EDA_ETL-MONITOR_HMS.jar --server.port=8049 --project_name=MONITOR_HMS --job_group=MONITOR_HMS --job_name=LoadHmsData
    ln -nfs /opt/servers/etl/EDA_ETL-MONITOR_HMS.jar /home/scripts/etl/ETL-LoadHmsData.jar
    
    java -Xms512m -Xmx2048m -jar -Dspring.profiles.active=edanew-eda -Dlog.home=LoadQtapSummaryToGP /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadQtapSummaryToGP.jar --server.port=8155 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadQtapSummaryToGP
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadQtapSummaryToGP.jar /home/scripts/etl/ETL-LoadQtapSummaryToGP.jar
    
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=etl-codis-eda -Dlog.home=LoadSpecialOpeToRedis /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadSpecialOpeToRedis.jar --server.port=8133 --project_name=EDA_ETL_PROD --job_group=EDA_ETL --job_name=LoadSpecialOpeToRedis
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadSpecialOpeToRedis.jar /home/scripts/etl/ETL-LoadSpecialOpeToRedis.jar
    
    
    java -Xms1024m -Xmx2048m -jar -Dspring.profiles.active=edanew-codis-eda -Dlog.home=LoadTempAlarmToSubAlarm /opt/servers/etl/EDA_ETL-LoadTempAlarmToSubAlarm_0724.jar --server.port=8046 --project_name=LoadTempAlarmToSubAlarm --job_group=LoadTempAlarmToSubAlarm --job_name=LoadTempAlarmToSubAlarm
    ln -nfs /opt/servers/etl/EDA_ETL-LoadTempAlarmToSubAlarm_0724.jar /home/scripts/etl/ETL-LoadTempAlarmToSubAlarm.jar
    
    
    java -jar -Dspring.profiles.active=rptetledadb -Dlog.home=MES_DEFECT /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar --server.port=8181 --project_name=RPT_ETL_PROD --job_group=RPT_ETL_MES --job_name=LoadMESDefectData
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar /home/scripts/etl/ETL-LoadMESDefectData.jar
    
    java -jar -Dspring.profiles.active=rptetledadb -Dlog.home=MES_GLASS /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar --server.port=8182 --project_name=RPT_ETL_PROD --job_group=RPT_ETL_MES --job_name=LoadMESGlassData
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar /home/scripts/etl/ETL-LoadMESGlassData.jar
    
    java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edanew2-eda -Dlog.home=PartitionBuild_EDAGP6 /opt/servers/etl/PartitionBuild_EDAGP6.jar --server.port=8832 --project_name=PART_ETL_PROD --job_group=PART_ETL --job_name=PartitionBuild_EDAGP6
    ln -nfs /opt/servers/etl/PartitionBuild_EDAGP6.jar /home/scripts/etl/ETL-PartitionBuild_GP6.jar
    
    
    
    ln -nfs /opt/servers/etl/LoadEdcData_OC.jar /home/scripts/etl/ETL-LoadEdcData_OC.jar
    ln -nfs /opt/servers/etl/EDA_ETL-LoadProdhisToRedis.jar /home/scripts/etl/ETL-LoadProductHisToRedis.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_DELETE_HISDATA.jar /home/scripts/etl/ETL-DeleteHisData.jar
    ln -nfs /opt/servers/etl/EDA_ETL-deleteTempAlarm.jar /home/scripts/etl/ETL-DELETE_TEMP_ALARM.jar
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_OC.jar
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_CF.jar
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_CF2.jar
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY.jar
    ln -nfs /opt/servers/etl/LoadEdcData.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY2.jar
    ln -nfs /opt/servers/etl/LoadEdcData_byEQPT.jar /home/scripts/etl/ETL-LoadEdcData_ARRAY4.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_loadBasic.jar /home/scripts/etl/ETL-LoadBasicData.jar
    ln -nfs /opt/servers/etl/EDA_ETL-20191101.jar /home/scripts/etl/ETL-LoadCt1Data.jar
    ln -nfs /opt/servers/etl/EDA_ETL-MONITOR_HMS.jar /home/scripts/etl/ETL-LoadHmsData.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadQtapSummaryToGP.jar /home/scripts/etl/ETL-LoadQtapSummaryToGP.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_LoadSpecialOpeToRedis.jar /home/scripts/etl/ETL-LoadSpecialOpeToRedis.jar
    ln -nfs /opt/servers/etl/EDA_ETL-LoadTempAlarmToSubAlarm_0724.jar /home/scripts/etl/ETL-LoadTempAlarmToSubAlarm.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar /home/scripts/etl/ETL-LoadMESDefectData.jar
    ln -nfs /opt/servers/etl/EDA_ETL-0.0.30-SNAPSHOT_MES_SUM.jar /home/scripts/etl/ETL-LoadMESGlassData.jar
    ln -nfs /opt/servers/etl/PartitionBuild_EDAGP6.jar /home/scripts/etl/ETL-PartitionBuild_EDAGP6.jar
    ln -nfs /opt/servers/etl/EDA_ETL-DeleteRedis-.jar /home/scripts/etl/ETL-DeleteRedisDataNew.jar
    

    ETL Management Tool Overview

    The tool is a manager for the launch scripts rather than a concrete startup script itself; it currently manages the ETL scripts and supports multiple options and arguments.

    cat <<-EOF
    		     options:
    		        -s            --start job or jobs
                    -k            --kill job or jobs
                    -u            --update etlConfD table
                    -c            --check job status
                    -d            --select db qms/eda
    				-t            --show etlConfD table
                    -h            --help show help info 
                    -v            --version show version 
    		    usage:
    		        etl -k LoadGlassHst_AR -c LoadGlassHst_AR
    		        etl -k LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1    
    		        etl -k all 
    		        etl -s all -d qms  
    		        etl -s all -d eda  
    		        etl -s  LoadGlassHst_AR
                    # etl -s  LoadGlassHst_AR -db qms              # not supported in V1.0
    		        etl -s  LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1 
    		        # etl -s  LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1 -db qms    # not supported in V1.0
    		        etl -u LoadGlassHst_AR 
                    etl -u LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1 
    		        etl -u all
    		        etl -c LoadGlassHst_AR,LoadGlassHst_CF -t yes
    		        etl -c LoadGlassHst_AR -t yes LoadGlassHst_AR 
                    etl -c all
    				etl -t yes LoadGlassHst_AR 
    	
    		       ...
    	EOF
    

    Currently supported operations

    ⭐️ Update the etlConfD configuration for one or more jobs

    ⭐️ Check the etlConfD status for one or more jobs

    ⭐️ Switch databases with a single command (V1.1 adds per-job database switching)

    ⭐️ Start one or more jobs

    ⭐️ Stop one or more jobs and update the corresponding etlConfD entries

    Operations to be supported later (see the TODO section below)

    Dependencies required by the ETL management tool

    • MySQL client

      mysql  Ver 14.14 Distrib 5.6.10, for Linux (x86_64)
      
    • jobInfo configuration (job name, port, start script)

    Server 46

    --job_name=LoadGlassHst_AR_TEST:--server.port=8765:/home/scripts/etl/Apshell/run_scripts/start-LoadGlassHst_AR_TEST.sh
    --job_name=LoadGlassHst_AR:--server.port=8131:/home/scripts/etl/Apshell/run_scripts/start-LoadGlassHst_AR.sh
    --job_name=LoadGlassHst_CF:--server.port=8088:/home/scripts/etl/Apshell/run_scripts/start-LoadGlassHst_CF.sh
    --job_name=LoadGlassHst_OC_CELL1:--server.port=8057:/home/scripts/etl/Apshell/run_scripts/start-LoadGlassHst_OC_CELL1.sh
    --job_name=LoadGlassHst_OC_CELL2:--server.port=8056:/home/scripts/etl/Apshell/run_scripts/start-LoadGlassHst_OC_CELL2.sh
    --job_name=LoadChamberData2GP:--server.port=8079:/home/scripts/etl/Apshell/run_scripts/start-LoadChamberData2GP.sh
    --job_name=LoadEqEventData2GP:--server.port=8029:/home/scripts/etl/Apshell/run_scripts/start-LoadEqEventData2GP.sh
    --job_name=LoadEdcDataGP6MdsACF:--server.port=8131:/home/scripts/etl/Apshell/run_scripts/start-LoadEdcDataGP6MdsACF.sh
    --job_name=LoadEdcDataGP6PdsACF:--server.port=8133:/home/scripts/etl/Apshell/run_scripts/start-LoadEdcDataGP6PdsACF.sh
    --job_name=LoadDefectData2MES:--server.port=8993:/home/scripts/etl/Apshell/run_scripts/start-LoadDefectData2MES.sh
    --job_name=HeartBeat:--server.port=8992:/home/scripts/etl/Apshell/run_scripts/start-HeartBeat.sh
    --job_name=LoadMachiePauseHis:--server.port=8229:/home/scripts/etl/Apshell/run_scripts/start-LoadMachiePauseHis.sh
    
    
    --job_name=LoadGlassHst_AR_TEST:--server.port=8765:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadGlassHst_AR_TEST.sh
    --job_name=LoadGlassHst_AR:--server.port=8131:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadGlassHst_AR.sh
    --job_name=LoadGlassHst_CF:--server.port=8088:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadGlassHst_CF.sh
    --job_name=LoadGlassHst_OC_CELL1:--server.port=8057:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadGlassHst_OC_CELL1.sh
    --job_name=LoadGlassHst_OC_CELL2:--server.port=8056:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadGlassHst_OC_CELL2.sh
    --job_name=LoadChamberData2GP:--server.port=8079:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadChamberData2GP.sh
    --job_name=LoadEqEventData2GP:--server.port=8029:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadEqEventData2GP.sh
    --job_name=LoadEdcDataGP6MdsACF:--server.port=8131:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadEdcDataGP6MdsACF.sh
    --job_name=LoadEdcDataGP6PdsACF:--server.port=8133:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadEdcDataGP6PdsACF.sh
    --job_name=LoadDefectData2MES:--server.port=8993:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadDefectData2MES.sh
    --job_name=HeartBeat:--server.port=8992:/home/scripts/etl/Apshell_eda/run_scripts/start-HeartBeat.sh
    --job_name=LoadMachiePauseHis:--server.port=8229:/home/scripts/etl/Apshell_eda/run_scripts/start-LoadMachiePauseHis.sh
    

    Server 43

    --job_name=LoadEdcData_ARRAY:--server.port=8097:start-LoadEdcData_ARRAY.sh
    --job_name=LoadEdcData_ARRAY2:--server.port=8025:start-LoadEdcData_ARRAY2.sh
    --job_name=LoadEdcData_ARRAY4:--server.port=8029:start-LoadEdcData_ARRAY4.sh
    --job_name=LoadEdcData_CF:--server.port=8098:start-LoadEdcData_CF.sh
    --job_name=LoadEdcData_CF2:--server.port=8028:start-LoadEdcData_CF2.sh
    --job_name=LoadEdcData_OC:--server.port=8099:start-LoadEdcData_OC.sh
    --job_name=LoadBasicData:--server.port=8056:start-LoadBasicData.sh
    --job_name=LoadCt1Data:--server.port=8043:start-LoadCt1Data.sh
    --job_name=LoadHmsData:--server.port=8049:start-LoadHmsData.sh
    --job_name=LoadQtapSummaryToGP:--server.port=8155:start-LoadQtapSummaryToGP.sh
    --job_name=LoadSpecialOpeToRedis:--server.port=8133:start-LoadSpecialOpeToRedis.sh
    --job_name=LoadTempAlarmToSubAlarm:--server.port=8046:start-LoadTempAlarmToSubAlarm.sh
    --job_name=LoadMESDefectData:--server.port=8181:start-LoadMESDefectData.sh
    --job_name=LoadMESGlassData:--server.port=8182:start-LoadMESGlassData.sh
    --job_name=PartitionBuild_EDAGP6:--server.port=8132:start-PartitionBuild_EDAGP6.sh
    --job_name=PartitionBuild:--server.port=18132:start-PartitionBuild.sh
    --job_name=LoadProductHisToRedis:--server.port=8045:start-LoadProductHisToRedis.sh
    --job_name=DeleteHisData:--server.port=8026:start-DeleteHisData.sh
    --job_name=DELETE_TEMP_ALARM:--server.port=8047:start-DELETE_TEMP_ALARM.sh
    --job_name=DeleteRedisDataNew:--server.port=8947:start-DeleteRedisDataNew.sh
    Job_ProdId_AddTo_GP::LoadProdIdToGpSystem_start.sh
    
    --job_name=LoadEdcData_ARRAY:--server.port=8097:/home/scripts/etl/Apshell/start-LoadEdcData_ARRAY.sh
    --job_name=LoadEdcData_ARRAY2:--server.port=8025:/home/scripts/etl/Apshell/start-LoadEdcData_ARRAY2.sh
    --job_name=LoadEdcData_ARRAY4:--server.port=8029:/home/scripts/etl/Apshell/start-LoadEdcData_ARRAY4.sh
    --job_name=LoadEdcData_CF:--server.port=8098:/home/scripts/etl/Apshell/start-LoadEdcData_CF.sh
    --job_name=LoadEdcData_CF2:--server.port=8028:/home/scripts/etl/Apshell/start-LoadEdcData_CF2.sh
    --job_name=LoadEdcData_OC:--server.port=8099:/home/scripts/etl/Apshell/start-LoadEdcData_OC.sh
    --job_name=LoadBasicData:--server.port=8056:/home/scripts/etl/Apshell/start-LoadBasicData.sh
    --job_name=LoadCt1Data:--server.port=8043:/home/scripts/etl/Apshell/start-LoadCt1Data.sh
    --job_name=LoadHmsData:--server.port=8049:/home/scripts/etl/Apshell/start-LoadHmsData.sh
    --job_name=LoadQtapSummaryToGP:--server.port=8155:/home/scripts/etl/Apshell/start-LoadQtapSummaryToGP.sh
    --job_name=LoadSpecialOpeToRedis:--server.port=8133:/home/scripts/etl/Apshell/start-LoadSpecialOpeToRedis.sh
    --job_name=LoadTempAlarmToSubAlarm:--server.port=8046:/home/scripts/etl/Apshell/start-LoadTempAlarmToSubAlarm.sh
    --job_name=LoadMESDefectData:--server.port=8181:/home/scripts/etl/Apshell/start-LoadMESDefectData.sh
    --job_name=LoadMESGlassData:--server.port=8182:/home/scripts/etl/Apshell/start-LoadMESGlassData.sh
    --job_name=PartitionBuild_GP6:--server.port=8132:/home/scripts/etl/Apshell/start-PartitionBuild_GP6.sh
    --job_name=LoadProductHisToRedis:--server.port=8045:/home/scripts/etl/Apshell/start-LoadProductHisToRedis.sh
    --job_name=DeleteHisData:--server.port=8026:/home/scripts/etl/Apshell/start-DeleteHisData.sh
    --job_name=DELETE_TEMP_ALARM:--server.port=8047:/home/scripts/etl/Apshell/start-DELETE_TEMP_ALARM.sh
    
    --job_name=LoadEdcData_ARRAY:--server.port=8097:/home/scripts/etl/Apshell_eda/start-LoadEdcData_ARRAY.sh
    --job_name=LoadEdcData_ARRAY2:--server.port=8025:/home/scripts/etl/Apshell_eda/start-LoadEdcData_ARRAY2.sh
    --job_name=LoadEdcData_ARRAY4:--server.port=8029:/home/scripts/etl/Apshell_eda/start-LoadEdcData_ARRAY4.sh
    --job_name=LoadEdcData_CF:--server.port=8098:/home/scripts/etl/Apshell_eda/start-LoadEdcData_CF.sh
    --job_name=LoadEdcData_CF2:--server.port=8028:/home/scripts/etl/Apshell_eda/start-LoadEdcData_CF2.sh
    --job_name=LoadEdcData_OC:--server.port=8099:/home/scripts/etl/Apshell_eda/start-LoadEdcData_OC.sh
    --job_name=LoadBasicData:--server.port=8056:/home/scripts/etl/Apshell_eda/start-LoadBasicData.sh
    --job_name=LoadCt1Data:--server.port=8043:/home/scripts/etl/Apshell_eda/start-LoadCt1Data.sh
    --job_name=LoadHmsData:--server.port=8049:/home/scripts/etl/Apshell_eda/start-LoadHmsData.sh
    --job_name=LoadQtapSummaryToGP:--server.port=8155:/home/scripts/etl/Apshell_eda/start-LoadQtapSummaryToGP.sh
    --job_name=LoadSpecialOpeToRedis:--server.port=8133:/home/scripts/etl/Apshell_eda/start-LoadSpecialOpeToRedis.sh
    --job_name=LoadTempAlarmToSubAlarm:--server.port=8046:/home/scripts/etl/Apshell_eda/start-LoadTempAlarmToSubAlarm.sh
    --job_name=LoadMESDefectData:--server.port=8181:/home/scripts/etl/Apshell_eda/start-LoadMESDefectData.sh
    --job_name=LoadMESGlassData:--server.port=8182:/home/scripts/etl/Apshell_eda/start-LoadMESGlassData.sh
    --job_name=PartitionBuild_EDAGP6:--server.port=18132:/home/scripts/etl/Apshell_eda/start-PartitionBuild_EDAGP6.sh
    --job_name=LoadProductHisToRedis:--server.port=8045:/home/scripts/etl/Apshell_eda/start-LoadProductHisToRedis.sh
    --job_name=DeleteHisData:--server.port=8026:/home/scripts/etl/Apshell_eda/start-DeleteHisData.sh
    --job_name=DELETE_TEMP_ALARM:--server.port=8047:/home/scripts/etl/Apshell_eda/start-DELETE_TEMP_ALARM.sh
    
    Startup script
    sh /opt/servers/etl/run-etl-prod.sh 512m 2048m etl DELETEHISDATA DELETE_HISDATA.jar 8026 DEL_ETL_PROD DEL_ETL DeleteHisData
    
    java -Xms512m -Xmx2048m -jar -Dspring.profiles.active=etl-eda -Dlog.home=DELETEHISDATA /opt/servers/etl/EDA_ETL-0.0.51-SNAPSHOT_DELETE_HISDATA.jar --server.port=8026 --project_name=DEL_ETL_PROD --job_group=DEL_ETL --job_name=DeleteHisData
    
    
    
    more /opt/servers/etl/run-etl-prod.sh
    MIN_SIZE=${1:-1024m}
    MAX_SIZE=${2:-1024m}
    PROFILE_NAME=${3:-test}
    LOG_HOME_NAME=${4:-TEST}
    JAR_NAME=${5:-TEST}
    PORT=${6:-8080}
    PROJECTNAME=${7:-EDA_ETL_PROD}
    GROUPNAME=${8:-EDA_ETL}
    JOBNAME=${9:-LoadTest}
    
    setsid java -Xms${MIN_SIZE} -Xmx${MAX_SIZE} -jar -Dspring.profiles.active=${PROFILE_NAME} -Dlog.home=${LOG_HOME_NAME} ${JAR_NAME} --server.port=${PORT} --project_name=${PROJECTNAME} --job_group=${GROUPNAME} --job_name=${JOBNAME} > /dev/null &
    PID=$(ps -ef|grep ${LOG_HOME_NAME}|grep java |grep -v grep | awk '{print $2}')
    echo "Run EDA_ETL ${LOG_HOME_NAME}:${PORT}, PID: $PID"
    
    
    How to update a job's run_flg
    # First, check whether the mysql command is available (see the availability check sketched below)
    mysql -h 10.50.10.180 -uroot -pchot123 -e "use ch_qms; select job_name from etl_conf_d where JOB_GROUP_NAME like '%edc%' and VALID_FLG = 'Y' limit 1"
    
    mysql -h 10.50.10.180 -uroot -pchot123 -e "use ch_qms; UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME like '%LoadGlassHst_AR_TEST%' and VALID_FLG = 'Q'"
    
    select JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = 'LoadGlassHst_AR_TEST' and VALID_FLG = 'Q'
    
    
    ## A more robust version
    MYSQL=$(which mysql)
    ${MYSQL} -h 10.50.10.180 -uroot -pchot123 << EOF
    ${updateStatement}
    EOF
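
    To answer the question above, a minimal availability check could be placed before any SQL is run (a sketch; the error message and exit code are illustrative):

    # abort early if no mysql client is found on PATH
    if ! command -v mysql >/dev/null 2>&1; then
        echo "mysql client not found, please install it first" >&2
        exit 99
    fi
    MYSQL=$(command -v mysql)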
    
    MySQL pitfalls

    1. USE without a database name fails:

    mysql> use
    ERROR:
    USE must be followed by a database name
    
    local updateStatement="use ch_qms; UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME  = '${jobName}' and VALID_FLG = 'Q'"
    mysqlUtil "${updateStatement}"
    

    When passing the statement, wrap the argument in double quotes; otherwise the shell splits it at the first space and mysqlUtil only receives the first word.
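
    A quick illustration of the word-splitting behaviour (the demo function is hypothetical, for illustration only):

    # prints only the first positional parameter it receives
    showFirstArg() { echo "first arg: $1"; }

    stmt="use ch_qms; select 1"
    showFirstArg ${stmt}     # unquoted: word splitting, first arg is just "use"
    showFirstArg "${stmt}"   # quoted: the whole statement arrives as one argument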

    2. Why does mysql output captured into a variable switch to -s (silent) style?

    -t, --table Output in table format

    -s, --silent Be more silent. Print results with a tab as separator,each row on new line.

    mysql -h 10.50.10.180  -uroot -pchot123 -s -e "use ch_qms; select JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = 'LoadGlassHst_AR_TEST' and VALID_FLG = 'Q'"
    

    Adding the -t option solves this: -t forces table-formatted output even when stdout is not a terminal, which is exactly the case when the output is captured into a variable.
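
    For example (illustrative; credentials as in the commands above):

    # without -t the captured result is tab-separated; -t keeps the ASCII table borders
    result=$(mysql -h 10.50.10.180 -uroot -pchot123 -t -e "use ch_qms; select JOB_NAME,RUN_FLG from etl_conf_d limit 1")
    echo "${result}"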

    Advanced grep tricks

    Given the following text:

    abcde
    abcd
    abc
    ab
    

    🏑 How would you use grep to pick out exactly abc?

    Answer: the \b word-boundary anchor in the regex does the trick.

    grep -E '\babc\b' test.txt
    

    -E enables extended regular expressions; the \b anchors pin the match at both ends of the word.

    🏑 How would you use grep to pick out abc or ab?

    Use the regex alternation operator |:

    grep -E 'abc|ab' test.txt
    

    🏑 How would you use grep to pick out exactly abc and ab (and nothing else)?

    grep -E '\babc\b|\bab\b' test.txt
    
    • Use sed to turn LoadGlassHst_AR,LoadGlassHst_CF into '\bLoadGlassHst_AR\b|\bLoadGlassHst_CF\b':
    echo 'LoadGlassHst_AR,LoadGlassHst_CF' | sed -r 's/,/\\b|\\b/g' | sed -r 's/^/\\b/' | sed -r 's/$/\\b/'
    \bLoadGlassHst_AR\b|\bLoadGlassHst_CF\b
    

    Of course, awk can do the same job.

    Advanced awk tricks

    grep is powerful but not very flexible; awk can use conditional expressions inside its pattern.

    abcde
    abcd
    abc
    ab
    

    How would you use awk to pick out exactly abc? See the sketch below.
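
    Two equivalent ways (a sketch, assuming the same test.txt as above):

    # exact string comparison against the whole line
    awk '$0 == "abc"' test.txt
    # or anchor a regex at both ends
    awk '/^abc$/' test.txt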

    Code testing
     _DEBUG=on bash etl.sh -c LoadGlassHst_AR,LoadGlassHst_CF -t yes
     _DEBUG=on  bash  etl.sh -c all -t yes
    
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$_DEBUG=on bash etl_0803.sh -k LoadEdcData_ARRAY,LoadEdcData_ARRAY2,^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] k option, value is :LoadEdcData_ARRAY,LoadEdcData_ARRAY2,^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] getopts 正在处理的参数位置 : 2
    [2022-08-03 14:56:17][DEBUG] stopetl() first par: LoadEdcData_ARRAY,LoadEdcData_ARRAY2,^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][INFO] stopetl() etl_list : LoadEdcData_ARRAY,LoadEdcData_ARRAY2,^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] line 144 JobName:LoadEdcData_ARRAY,LoadEdcData_ARRAY2,^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] cond: LoadEdcData_ARRAY|LoadEdcData_ARRAY2|^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] killpid() first par: LoadEdcData_ARRAY|LoadEdcData_ARRAY2|^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] killpid() jobNameRaw par: LoadEdcData_ARRAY|LoadEdcData_ARRAY2|^CadEdcData_ARRAY4
    [2022-08-03 14:56:17][DEBUG] killpid() JobName: ; pid: 1222 6184 11495 15972 16071 16223 16493 16668 16805 16950 17451 17514 19494 20049 26777 30443 31005 34263 34451 34631 41750 41803 41868 41933 41998 42063 42130 42195 42309 44049 48566 48920 ,  ps -ef | grep -E  |grep -v grep | grep -v etl_0803.sh |  awk '{print $2}' | awk -vORS=  '{print etl_0803.sh}'
    [2022-08-03 14:56:17][INFO] killpid() JobName: ,pid: 1222 6184 11495 15972 16071 16223 16493 16668 16805 16950 17451 17514 19494 20049 26777 30443 31005 34263 34451 34631 41750 41803 41868 41933 41998 42063 42130 42195 42309 44049 48566 48920 : is running
    [2022-08-03 14:56:17][INFO] killpid() JobName:  begin stop!!!
    [2022-08-03 14:56:19][INFO] killpid() JobName:  stop finish!!!
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() first par:
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() etl_list : , select_op: no
    etl_0803.sh: line 351: [: ==: unary operator expected
    [2022-08-03 14:56:19][INFO] updateEtlJob()  JobName:,select_op: no
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() desireJobCnt:0,actualJobsCnt: 20
    [2022-08-03 14:56:19][WARN] checkEtlJob() 输入的job list中存在未配置的job,您输入的jobList:,请在/home/scripts/etl/etl-list检查是否配置 exit 99
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() grep -E  /home/scripts/etl/etl-list
    [2022-08-03 14:56:19][DEBUG] updateEtlJob():--job_name=LoadEdcData_ARRAY:--server.port=8097:start-LoadEdcData_ARRAY.sh
    [2022-08-03 14:56:19][DEBUG] 遍历jobList,jobName:--job_name=LoadEdcData_ARRAY:--server.port=8097:start-LoadEdcData_ARRAY.sh, jobNameSingle: LoadEdcData_ARRAY, portNameSingle: 8097
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() ps -ef | grep -E LoadEdcData_ARRAY | grep 8097 | grep -v grep | grep java | grep -v etl_0803.sh |  awk '{print $2}'
    [2022-08-03 14:56:19][ERROR] updateEtlJob() jobName:LoadEdcData_ARRAY is not runnping, please check!!!!
    [2022-08-03 14:56:19][DEBUG] updateEtlJob() grep -E  /home/scripts/etl/etl-list
    
    

    Starting and stopping the PartitionBuild program:

    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -s PartitionBuild_GP6
    [2022-08-05 14:42:17][INFO] startetl() first par:PartitionBuild_GP6
    [2022-08-05 14:42:17][INFO] startetl() JobName: PartitionBuild_GP6 is  starting, shell scripts is :/home/scripts/etl/Apshell/start-PartitionBuild_GP6.sh
    setsid java -Xms512m -Xmx1024m -jar -Dspring.profiles.active=edanew2 -Dlog.home=PartitionBuildGP6 ETL-PartitionBuild_GP6.jar --server.port=8132 --project_name=PART_ETL_PROD --job_group=PART_ETL --job_name=PartitionBuild_GP6 &>/dev/null &
    You have mail in /var/spool/mail/root
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -c PartitionBuild_GP6
    [2022-08-05 14:42:22][INFO] start jobs : PartitionBuild_GP6
    [2022-08-05 14:42:22][INFO] checkEtlJob() 您总共输入了: 1支job, 第: 1支, jobName:PartitionBuild_GP6 is runnping, pid is: 30841
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -c PartitionBuild_GP6 -t yes
    [2022-08-05 14:42:30][INFO] start jobs : PartitionBuild_GP6 yes
    [2022-08-05 14:42:30][INFO] checkEtlJob() 您总共输入了: 1支job, 第: 1支, jobName:PartitionBuild_GP6 is runnping, pid is: 30841
    [2022-08-05 14:42:32][INFO] ---------------checkEtlJob() job info -----------------
    Warning: Using a password on the command line interface can be insecure.
    shop    JOB_NAME        RUN_START_TIMESTAMP     RUN_END_TIMESTAMP       VALID_FLG       RUN_FLG ETL_TIMESTAMP
    ARRAY   PartitionBuild_GP6      2022-08-04 00:00:00     2022-08-05 00:00:00     Y       N       2022-08-05 14:41:49
    
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    
    
    Fault-tolerance testing
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -c PartitionBuild_GP62
    [2022-08-05 14:45:33][INFO] start jobs : PartitionBuild_GP62
    [2022-08-05 14:45:33][WARN] checkEtlJob() 输入的job list中存在未配置的job,您输入的jobList:PartitionBuild_GP62,请在/home/scripts/etl/etl-list检查是否配置 }
    [2022-08-05 14:45:33][ERROR] checkEtlJob() ,jobName: 未配置,请确认配置文件
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -c PartitionBuild_
    [2022-08-05 14:45:37][INFO] start jobs : PartitionBuild_
    [2022-08-05 14:45:37][WARN] checkEtlJob() 输入的job list中存在未配置的job,您输入的jobList:PartitionBuild_,请在/home/scripts/etl/etl-list检查是否配置 }
    [2022-08-05 14:45:37][ERROR] checkEtlJob() ,jobName: 未配置,请确认配置文件
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -s PartitionBuild_
    [2022-08-05 14:45:44][ERROR] startEtlJob() 输入的job list中存在未配置的job,您输入的jobList:PartitionBuild_,请在/home/scripts/etl/etl-list检查是否配置 }
    [2022-08-05 14:45:44][ERROR] startEtlJob() ,jobName: 未配置,请确认配置文件
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -s PartitionBuild_GP6
    [2022-08-05 14:45:53][INFO] startetl() first par:PartitionBuild_GP6
    [2022-08-05 14:45:53][WARN] startetl() startetl() JobName: PartitionBuild_GP6,30841: is running,please check,eg: etl -c PartitionBuild_GP6 -t yes}
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -k PartitionBuild_GP6
    [2022-08-05 14:46:02][INFO] stopetl() etl_list : PartitionBuild_GP6
    [2022-08-05 14:46:02][INFO] killpid() JobName: PartitionBuild_GP6,pid: 30841 : is running
    [2022-08-05 14:46:02][INFO] killpid() JobName: PartitionBuild_GP6 begin stop!!!
    [2022-08-05 14:46:04][INFO] killpid() JobName: PartitionBuild_GP6 stop finish!!!
    [2022-08-05 14:46:04][INFO] updateEtlJob()  JobName:PartitionBuild_GP6,select_op: no
    Warning: Using a password on the command line interface can be insecure.
    [2022-08-05 14:46:04][INFO] updateEtlJob() res: return: 1
    [2022-08-05 14:46:04][INFO] updateEtlJob() jobs共: 1支,第: 1支, jobName: PartitionBuild_GP6更新成功~~~,update Sql is : use ch_qms;UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME  = 'PartitionBuild_GP6' and VALID_FLG = 'Y';
    [2022-08-05 14:46:08][INFO] killpid() JobName: PartitionBuild_GP6, UJob_NAME:PartitionBuild_GP6 etl config has been configed.
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$
    [(QMSPL1ETL01)root@P1QMSPL1ETL01 /home/scripts/etl]$ bash etl_0805.sh -c PartitionBuild_GP6
    [2022-08-05 14:46:21][INFO] start jobs : PartitionBuild_GP6
    [2022-08-05 14:46:21][ERROR] checkEtlJob() jobName:PartitionBuild_GP6 is not runnping, please check!!!!
    

    TODO

    • When several similar job names are passed (e.g. LoadGlassHst_AR,LoadGlassHst_AR1), compute desireJobCnt and actualJobCnt with a proper regex so the comparison is exact

       sed -r 's/,/\\b|\\b/g' |sed -r 's/^/\\b/'|sed -r 's/$/\\b/'
      \bLoadGlassHst_AR\b|\bLoadGlassHst_CF\b
      
    • Allow -s, -u and -d to be combined for a single job

    • Integrate with spug so the tool can be invoked from it.

    • Package the ETL management tool as a single binary

    • Support a more secure database connection (mysql_config_editor)

      Warning: Using a password on the command line interface can be insecure.

      If the SQL call carries a plaintext password, the mysql client prints this warning.

      Solution:

      mysql_config_editor is a utility shipped with MySQL for storing login credentials in encrypted form. It avoids plaintext passwords in places such as shell scripts, can also manage credentials for several MySQL instances, and saves typing a pile of connection parameters every time you log in with the mysql command. Simple and convenient; see the sketch below.
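
      A minimal sketch of how the tool could use it (the login-path name is illustrative):

      # store the credentials once; the password is prompted for interactively and kept encrypted in ~/.mylogin.cnf
      mysql_config_editor set --login-path=etl --host=10.50.10.180 --user=root --password
      # scripts can then connect without a plaintext password on the command line
      mysql --login-path=etl -e "use ch_qms; select 1"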

    • Think about how to govern program versions

    • Improve the script's ability to manage jobs distributed across servers

    • Make the tool compatible with MFG ETL as well

      [(QMSPL1ETL01)root@P1QMSPL1ETL01 /opt/servers/etl/Apshell/run_scripts]$more /opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/LoadProdIdToGpSystem_start.sh
      source /etc/profile
      . ~/.bash_profile
      #!/bin/bash
      # Job_ProdToGpSystem
      time=$(date)
      #setsid java -cp /opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/ProdIdToGp/MFG_ETL.jar:/opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/Util/Util.jar  com.chot.RPT.LoadService.productIdAddToGp.ProductIdAddToGp  Job_ProdId_AddTo_GP
      setsid java -cp /opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/ProdIdToGp/MFG_ETL_edagp_20220722.jar:/opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/Util/Util.jar  com.chot.RPT.LoadService.productIdAddToGp.ProductIdAddToGp  Job_ProdId_AddTo_GP
      PID=$(ps -ef|grep Job_ProdId_AddTo_GP|grep java|grep -v grep | awk '{print $2}')
      echo "$PID is running"
      echo "$time Start Job_ProdId_AddTo_GP Successful, PID: $PID">>/opt/cimPro/LXD/ProdIdAddToGp/MFG_ETL/Log/log.log
      
      
    • Automatically download the mysql client when it is missing

      which: no mysql in (/usr/local/greenplum-loaders-4.3.16.1/bin:/usr/local/greenplum-loaders-4.3.16.1/ext/python/bin:/usr/local/jdk1.8.0_121/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
      
    • Bring the defectSummary ETL under the same management

    • Add a feature to view logs

    • Enhance check: it should report more dimensions, e.g. the CPU and memory used by the process and its start time (see the sketch after the two commands below)

      1. ps -p <pid> -o lstart  # get the process start time

      ps -p 17214 -o lstart
      STARTED
      Thu Oct 14 14:44:57 2021

      2. ps aux  # get memory and CPU usage
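
      A combined form that the enhanced check could use (illustrative; $pid stands for the job's PID):

      # pid, start time, CPU%, memory% and command line in a single call
      ps -p "$pid" -o pid,lstart,%cpu,%mem,cmd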

    FAQ

    • Why not put all the startup parameters into the configuration?

    The ETL management tool is positioned as a manager, not as the launcher of one specific program. If some ETL program does not follow the common startup parameters, that would have to be special-cased in the script. What the script should provide is flow control, so the configuration holds the startup command itself; it works like an interface: whoever needs it implements a start script and registers it here.

    • How do we prevent injected input from killing unrelated processes?

    Stopping a program currently means sending a kill signal directly to the matched processes, so injected job names could kill the wrong processes. The design therefore counts, at the business level, how many jobs the caller asked to stop or update and compares that with the number of jobs that will actually be operated on; see the sketch below.

    • Things the ETL tool has to take care of

    A human or the script must verify that the processes have actually stopped.

    A human or the script must set run_flg to N in etl_conf_d via mysql.

    How do we confirm that a job has started and is running normally?

    What counts as "running normally"? See the sketch below for one possible criterion.
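
    One possible criterion combines the existing process check with a listening-port check (a sketch, assuming every job exposes the --server.port listed in etl-list):

    jobName=LoadGlassHst_AR; port=8131   # values taken from etl-list
    pid=$(ps -ef | grep -E "\b${jobName}\b" | grep java | grep -v grep | awk '{print $2}')
    if [ -n "${pid}" ] && ss -lnt | grep -qE ":${port}\b"; then
        echo "${jobName} is up (pid ${pid}) and listening on port ${port}"
    else
        echo "${jobName} is not healthy, please check" >&2
    fi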

    ETL Management Tool Implementation

    #!/bin/bash
    #==============================================================#
    # File      :   stop-etl
    # Ctime     :   2022年7月20日15:30:11
    # Mtime     :   2022年8月2日15:30:15
    # Usage     :   ./etl start all 
    # author    :   ninesun
    #==============================================================#
    #----------------------------------------------#
    # usage & exit
    #----------------------------------------------#
    VERSION=1.0
    etlListPath=/home/scripts/etl/etl-list
    MYSQL=$(which mysql)
    sqlPar=
    updateStatementSingle="use ch_qms;UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME  = '${sqlPar}' and VALID_FLG = 'Q'"
    selectStatementSingle="use ch_qms;select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${sqlPar}' and VALID_FLG = 'Q'"
    
    # mysql sql 执行方法
    function mysqlUtil(){
    local statement=${1}
    log_debug "mysqlUtil() 正在连接Mysql..........."
    sleep 0.5s
    log_debug "mysqlUtil() 开始执行sql: ${statement}"
    if [[ -n ${statement} ]];then
    	${MYSQL} -h 10.50.10.180 -uroot -pchot123 -t<<EOF
    	${statement}
    EOF
    	 log_debug "mysqlUtil() statement: ${statement} 执行成功"
    else 
    	log_error "mysqlUtil() statement 执行好像失败了~~"
    	exit 99
    fi
    
    }
    function usage() {
    	cat <<-EOF
    		    etl stop/start and update etl-conf-d
    		     options:
    		        -s,--start    start
                    -k,--kill     kill 
                    -u,--update 
                    -c,--check
                    -d,--db
    				-t,--table
                    -h,--help        help
                    -v,--version     version
    		    usage:
    		       
    		        etl -k LoadGlassHst_AR   # stop  etl 
    		        etl -k LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1     # stop etls
    		        etl -k all  # stop all etl from config list
    		        
    		        etl -s all -d qms  # select qms db for all job, db is required,default qmsdb, 先执行-d 的action再执行-s的action
    		        etl -s all -d eda  # select eda db for all job
    		        etl -s  LoadGlassHst_AR -db qms # select qms db for LoadGlassHst_AR
    		        etl -s  LoadGlassHst_AR,LoadGlassHst_CF,LoadGlassHst_OC_CELL1 -db qms# select qms db for LoadGlassHst_AR,LoadGlassHst_CF
    		        
    		        etl -u LoadGlassHst_AR    # update etl-conf-d run_flg=N where LoadGlassHst_AR
    		        etl -u  # update etl-conf-d run_flg=N where readConfig 
    		        etl -c LoadGlassHst_AR # check pid
    				etl -t yes LoadGlassHst_AR # 检查etlconfd的内容,确认是否更新成功. 
    	
    		       ...
    	EOF
    	exit 1
    }
    
    #==============================================================#
    #                             Utils                            #
    #==============================================================#
    # logger functions
    function log_debug() {
        [ "$_DEBUG" == "on" ] && [ -t 2 ] && printf "\033[0;34m[$(date "+%Y-%m-%d %H:%M:%S")][DEBUG] $*\033[0m\n" >&2
    }
    function log_info() {
        [ -t 2 ] && printf "\033[0;32m[$(date "+%Y-%m-%d %H:%M:%S")][INFO] $*\033[0m\n" >&2 ||\
         printf "[$(date "+%Y-%m-%d %H:%M:%S")][INFO] $*\n" >&2
    }
    function log_warn() {
    	#[ "$_WARN" == "on" ] && 
        [ -t 2 ] && printf "\033[0;33m[$(date "+%Y-%m-%d %H:%M:%S")][WARN] $*\033[0m\n" >&2 ||\
         printf "[$(date "+%Y-%m-%d %H:%M:%S")][INFO] $*\n" >&2
    }
    function log_error() {
        [ -t 2 ] && printf "\033[0;31m[$(date "+%Y-%m-%d %H:%M:%S")][ERROR] $*\033[0m\n" >&2 ||\
         printf "[$(date "+%Y-%m-%d %H:%M:%S")][INFO] $*\n" >&2
    }
    
    
    function killpid(){
        log_debug "killpid() first par: $1"
        local jobNameRaw=${1}
        log_debug "killpid() jobNameRaw par: ${jobNameRaw}"
        local sed_cmd1="H;$x;s#\n# #gp" #回车替换为空格
    	if [ -n ${jobNameRaw} ];then 
    		local pid=`ps -ef | grep -E "\b${jobNameRaw}\b" | grep -v grep  | grep -v DEFECT_SUM | grep -vi trans |grep -v queues= | grep java | grep -v $0 |  awk '{print \$2}' |  awk -vORS=" " '{print $0}'`
        else 
    		log_debug "killpid() jobNameRaw is Empty!!!"
    	fi 
    	local desireJobCnt=$(echo ${jobNameRaw}|awk -F '|' '{print NF}') # 打印出列数和实际grep出的比较如果数量不对,代表输入的job中有没有配置的。这一步也是可以避免攻击.类似于一个简单的校验
    	local actualJobsCnt=`grep -cE "\b${jobNameRaw}\b" ${etlListPath}`
    	log_debug "desireJobCnt:${desireJobCnt},actualJobsCnt: ${actualJobsCnt}"
    	if [ ${desireJobCnt} -ne ${actualJobsCnt} ];then log_error "killpid() 输入的job list中存在未配置的job,您输入的jobList:${jobNameRaw},请在${etlListPath}检查是否配置"; exit 99;fi
    	log_debug "killpid() JobName: ${jobNameRaw} , pid: ${pid}, ps -ef | grep -E "${jobNameRaw}" | grep -v grep  | grep -v DEFECT_SUM | grep -vi trans |grep -v queues= | grep java | grep -v $0 |  awk '{print \$2}' |  awk -vORS=" " '{print $0}'"
    	local UJob_NAME=${jobNameRaw}
            if [[ -n ${pid} ]];then
                log_info "killpid() JobName: ${jobNameRaw},pid: ${pid}: is running"
                log_info "killpid() JobName: ${jobNameRaw} begin stop!!!"
                kill -9 $pid
                sleep 2s
                log_info "killpid() JobName: ${jobNameRaw} stop finish!!!"
                updateEtlJob ${UJob_NAME}
                sleep 2s
                log_info "killpid() JobName: ${jobNameRaw}, UJob_NAME:${UJob_NAME} etl config has been configed."
            else
                log_error "killpid() JobName: ${jobNameRaw} is not running!!!"
            fi
    }
    
    function stopetl() {
    log_debug "stopetl() first par: $1"
    local etl_list=${1-'all'}  # default all
    #local etl_list=${1-'/home/scripts/etl/etl-list-46'}  # default download path: /home/scripts/etl/etl-list-46
    log_info "stopetl() etl_list : ${etl_list}" #获取到所有job
    if [ "${etl_list}" == "all" ];then
        while read line;do
            time='date +"%F %T"'
            #处理配置
            OLD_IFS=$IFS
            IFS=:
            arr=($line)
            name=${arr[0]}
            port=${arr[1]}
    		jobName=`echo $name |awk -F = '{print $2}'`
    		# portName=`echo $port |awk -F = '{print $2}'`
    		log_info "stopetl() jobName : ${jobName}"
            killpid ${jobName} 
           
            #ps -ef | grep $line|grep -v grep | awk '{print $2}'|xagrs kill -9
        done < "${etlListPath}"
    else 
        # 切分出job 传参kill
        local sed_cmd1="s/,/|/g"
        log_debug "line 144 JobName:${KJob_NAME} "
        local cond=`echo ${KJob_NAME} | sed ${sed_cmd1}`
        log_debug "cond: ${cond} "
        killpid $cond
        #local arr=(${KJob_NAME}) # 赋值给数组
        #OLD_IFS=$IFS
        #IFS=,
        #log_info "IFS :${IFS} "  
        #for i in ${#arr[@]};do
            
         #   job=${arr[$i]}
        #    log_info "job : ${job}"
          #  killpid $job
        #done
    	
    fi
    }
    
    
    function startetl(){
        log_info "startetl() first par:$1"
        local jobNameRaw=${1}
    	local portNameRaw=${2-'8099'}
    	local shellNameRaw=${3}
        local jobName=${jobNameRaw}
    	#local sed_cmd1="s/\n/|/g"
    	#local jobNameNew=`echo ${jobName} |sed -r ${sed_cmd1}`
    	log_debug "startetl() jobNameNew:${jobName}"
    
    	local pid=`ps -ef | grep -E "\b${jobName}\b" |grep -v grep |grep -v "queueName"| grep java | grep -v $0 |  awk '{print \$2}'`
    	log_debug "startetl() ps -ef | grep -E " \\b${jobName}\\b" |grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'"
    	#log_debug "startetl() JobName: ${jobName}; pid: ${pid}"
    	if [[ -n ${pid} ]];then
    		tmp=`echo -e "\033[37;31;5mstartetl() JobName: ${jobName},${pid}: is running,please check,eg: etl -c ${jobName} -t yes}\033[39;49;0m"`
    		log_warn "startetl() ${tmp}"
    		#log_warn "startetl() JobName: ${jobName},${pid}: is running,please check,eg: etl -c ${jobName} -t yes}"
    		sleep 2s
    	else
    		log_info "startetl() JobName: ${jobName} is  starting, shell scripts is :${shellNameRaw}"
    		[ -f ${shellNameRaw} ] && source ${shellNameRaw} || { echo "${shellNameRaw} not exists";exit 99; } # 为什么要使用source https://segmentfault.com/a/1190000021616849、https://blog.csdn.net/chen1415886044/article/details/106865154
    		sleep 2s
    		if [ $? -eq 0 ];then
    			pidN=`ps -ef | grep -E "\b${jobName}\b" |grep -v grep |grep -v "queueName"| grep java | grep -v $0 |  awk '{print \$2}'`
    			if [[ -n ${pidN} ]];then
    				log_info "startetl() JobName: ${jobName} start successed,pid:${pidN}, return code: $?"
    			else 
    				log_info "startetl() JobName: ${jobName} start failed,error return code: $?"
    			fi
    		else 
    			log_info "startetl() JobName: ${jobName}启动失败,返回值: $?"
    		fi	
    	fi
    }
    
    
    function startEtlJob() {
    log_debug "startEtlJob() first par:$1"
    local etl_list=${1-'all'}  # default all
    log_debug "startEtlJob() etl_list : ${etl_list}" #获取到所有job
    if [[ -f ${etlListPath} ]];then
    	if [ "${etl_list}" == "all" ];then
    		while read line;do
    			time='date +"%F %T"'
    			#处理配置
    			OLD_IFS=$IFS
    			IFS=:
    			arr=($line)
    			name=${arr[0]}
    			port=${arr[1]}
    			startShell=${arr[2]}
    			jobName=`echo $name |awk -F = '{print $2}'`
    			portId=`echo $port |awk -F = '{print $2}'`
    			log_debug "arr2 shell: ${startShell}"
    			startetl ${jobName} ${portId} ${startShell}
    		   
    			#ps -ef | grep $line|grep -v grep | awk '{print $2}'|xagrs kill -9
    		done < "${etlListPath}"
    	else 
    		local sed_cmd1="s/,/|/g" # grep -E 'LoadGlassHst_AR|LoadGlassHst_CF' etl-list-46
    		log_debug "startEtlJob()  JobName:${SJob_NAME} "
    		local cond=`echo ${SJob_NAME} | sed ${sed_cmd1}` #LoadGlassHst_AR1|LoadGlassHst_AR|LoadGlassHst_CF|LoadGlassHst_OC
    		#job 判断配置是否在配置文件中? 比较cond 和 oldjobsCnt的个数
    		#job 判断配置是否在配置文件中? 比较cond 和 oldjobsCnt
    		local desireJobCnt=$(echo ${cond}|awk -F '|' '{print NF}') # 打印出列数和实际grep出的比较如果数量不对,代表输入的job中有没有配置的。这一步也是可以避免攻击.类似于一个简单的校验
    		local oldjobs=`grep -E "\b${cond}\b" ${etlListPath}`
    		local actualJobsCnt=`grep -cE "\b${cond}\b" ${etlListPath}`
    		log_debug "desireJobCnt:${desireJobCnt},actualJobsCnt: ${actualJobsCnt}"
    		if [ ${desireJobCnt} -ne ${actualJobsCnt} ];then log_error "startEtlJob() 输入的job list中存在未配置的job,您输入的jobList:${SJob_NAME},请在${etlListPath}检查是否配置";fi
    		# TODO: 提示是否继续? Y/N 继续/退出 (default N)
    		local jobs=`echo "${oldjobs}"`
    		if [[ -n ${jobs} ]];then
    			for i in ${jobs};do
    				
    				log_debug "startEtlJob() grep -E ${cond} "${etlListPath}""
    				log_debug "startEtlJob():${i} "
    				local jobNameSingle=`echo ${i} | awk -F = '{print $2}' |awk -F : '{print $1}'`
    				local portNameSingle=`echo ${i} | awk -F = '{print $3}'|awk -F : '{print $1}'`
    				local startShellSingle=`echo ${i}  |awk -F : '{print $3}'`
    				log_debug "遍历jobList,jobName:${i}"
    				startetl ${jobNameSingle} ${portNameSingle} ${startShellSingle}
    			done
    		else 
    			log_error "startEtlJob() ,jobName:${jobNameSingle} 未配置,请确认配置文件"
    			exit 99;
    		fi
    	fi
    else 
    		log_error "startEtlJob() etlListPath file is not exist"
    		exit 99
    fi
    
    }
    
    
    # 检查job是否在运行?
    function checkEtlJob() {
    log_debug "checkEtlJob() first par:$1"
    local etl_list=${1-'all'}  # default all
    local select_op=${2-'no'}  # default all
    local pidC=`wc -l ${etlListPath} | awk -F ' ' '{print $1}'`
    log_debug "checkEtlJob() etl_list : ${etl_list}" #获取到所有job
    if [[ -f ${etlListPath} ]];then
    	if [ "${etl_list}" == "all" ];then
    		local count=1
    		while read line;do
    			time='date +"%F %T"'
    			#处理配置
    			OLD_IFS=$IFS
    			IFS=:
    			arr=($line)
    			name=${arr[0]}
    			port=${arr[1]}
    			jobName=`echo $name |awk -F = '{print $2}'`
    			portId=`echo $port |awk -F = '{print $2}'`
    			local pid=`ps -ef | grep -E "\b${jobName}\b" | grep ${portId} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    			local processInfo=`ps -ef | grep -E "\b${jobName}\b" | grep ${portId} | grep -v grep | grep java | grep -v $0`
    			if [[ -n ${pid} ]];then 
    				log_info "checkEtlJob() jobs共: ${pidC} 支,第:${count}支, jobName:${jobName} is runnping, pid is: ${pid}"
    				log_info "checkEtlJob() processInfo: ${processInfo}"
    				sleep 2s
    				count=$[ $count + 1 ]	
    				if [[ ${select_op} == "yes" ]];then
    					local selectStatement="use ch_qms; select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobName}' and VALID_FLG = 'Y'"
    					echo  "---------------checkEtlJob() job info-----------------"
    					tmp=$(mysqlUtil "${selectStatement}")
    					# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    					echo -e "\e[1;42m${tmp}\e[0m" 
    					echo
    				fi
    			else
    				log_error "checkEtlJob() jobName:${jobName} is not runnping, please check!!!!" 
    			fi
    		done < "${etlListPath}"
    	else 
    		local sed_cmd1="s/,/|/g" # grep -E 'LoadGlassHst_AR|LoadGlassHst_CF' etl-list-46
    		log_debug "checkEtlJob()  JobName:${CJob_NAME} "
    		local cond=`echo ${CJob_NAME} | sed ${sed_cmd1}` #LoadGlassHst_AR1|LoadGlassHst_AR|LoadGlassHst_CF|LoadGlassHst_OC
    		#job 判断配置是否在配置文件中? 比较cond 和 oldjobsCnt的个数
    		#job 判断配置是否在配置文件中? 比较cond 和 oldjobsCnt
    		local desireJobCnt=$(echo ${cond}|awk -F '|' '{print NF}') # 打印出列数和实际grep出的比较如果数量不对,代表输入的job中有没有配置的。这一步也是可以避免攻击.类似于一个简单的校验 ,使用正则中的\b 指定边界精确匹配.
    		local oldjobs=`grep -E "\b${cond}\b" ${etlListPath}`
    		local actualJobsCnt=`grep -cE "\b${cond}\b" ${etlListPath}`
    		log_debug "checkEtlJob() desireJobCnt:${desireJobCnt},actualJobsCnt: ${actualJobsCnt}"
    		if [ ${desireJobCnt} -ne ${actualJobsCnt} ];then log_warn "checkEtlJob() 输入的job list中存在未配置的job,您输入的jobList:${CJob_NAME},请在${etlListPath}检查是否配置";fi
    		# TODO: 提示是否继续? Y/N 继续/退出 (default N)
    		local jobs=`echo "${oldjobs}"`
    		if [[ -n ${jobs} ]];then
    			local countS=1
    			for i in ${jobs};do	
    				log_debug "checkEtlJob() grep -E ${cond} "${etlListPath}""
    				log_debug "checkEtlJob():${i} "
    				local jobNameSingle=`echo ${i} | awk -F = '{print $2}' |awk -F : '{print $1}'`
    				local portNameSingle=`echo ${i} | awk -F = '{print $3}'|awk -F : '{print $1}'`
    				log_debug "checkEtlJob() 遍历jobList,jobName:${i}, jobNameSingle: ${jobNameSingle}, portNameSingle: ${portNameSingle}"
    				local pidSingle=`ps -ef | grep -E "\b${jobNameSingle}\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    				local processInfo=`ps -ef | grep -E "\b${jobNameSingle}\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0`
    				log_debug "checkEtlJob() ps -ef | grep -E " \\b${jobNameSingle}\\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}' "
    			if [[ -n ${pidSingle} ]];then 
    				log_info "checkEtlJob() 您总共输入了: ${actualJobsCnt}支job, 第: ${countS}支, jobName:${jobNameSingle} is runnping, pid is: ${pidSingle}"
    				log_info "checkEtlJob() processInfo: ${processInfo}"
    				sleep 2s
    				countS=$[ $countS + 1 ]
    				if [[ ${select_op} == "yes" ]];then
    					local selectStatementSingle="use ch_qms; select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobNameSingle}' and VALID_FLG = 'Y'"
    					sqlPar=${jobNameSingle}
    					log_debug "sqlPar: ${sqlPar}, selectStatementSingle: ${selectStatementSingle}"
    					log_info  "---------------checkEtlJob() job info -----------------"
    					tmp=$(mysqlUtil "${selectStatementSingle}")
    					# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    					echo -e "\e[1;42m${tmp}\e[0m" | tr '\t' ' ' 
    					echo
    				fi
    			else
    				log_error "checkEtlJob() jobName:${jobNameSingle} is not runnping, please check!!!!" 
    			fi				
    			done
    		else 
    			log_error "checkEtlJob() ,jobName:${jobNameSingle} 未配置,请确认配置文件"
    			exit 99;
    		fi
    	fi
    else 
    		log_error "checkEtlJob() etlListPath file is not exist"
    		exit 99
    fi
    
    }
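
# The TODO comments above ask for a Y/N prompt before continuing when the input job
# list contains unconfigured jobs. A minimal sketch (an assumed helper, not part of the
# original script) that defaults to "no"; callers could run `confirmContinue || exit 99`
# right after the count-mismatch warning.
function confirmContinue() {
	local answer
	read -r -p "Continue anyway? [y/N] " answer
	case "${answer}" in
		y|Y|yes|YES) return 0 ;;  # user explicitly accepted
		*) return 1 ;;            # anything else (including empty) means stop
	esac
}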
    
    
    
    
    
# updateEtlJob: update a job's configuration. Updates are sensitive, so record who updated what and when, in case it is needed later; this can be written to the script's own execution log (see the auditLog sketch after this function).
    function updateEtlJob() {
    log_debug "updateEtlJob() first par:$1"
local etl_list=${1}  # job list, or "all"
local select_op=${2-'no'}  # default no
    local pidC=`wc -l ${etlListPath} | awk -F ' ' '{print $1}'`
    local count=1
    log_debug "updateEtlJob() etl_list : ${etl_list}, select_op: ${select_op}" #获取到所有job
    if [[ -f ${etlListPath} ]];then
    	if [ ${etl_list} == "all" ];then
    		while read line;do
    			time='date +"%F %T"'
    			#处理配置
    			OLD_IFS=$IFS
    			IFS=:
    			arr=($line)
    			name=${arr[0]}
    			port=${arr[1]}
    			jobName=`echo $name |awk -F = '{print $2}'`
    			portId=`echo $port |awk -F = '{print $2}'`
			# Ideally this would query etl_conf_d via SQL to see whether the job exists, but not every program has an etl_conf entry (e.g. hms is judged via a redis key), so checking the process is more universally applicable.
    			local pid=`ps -ef | grep -E "\b${jobName}\b" | grep ${portId} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    			#if [[ -n ${pid} ]];then 
    				local updateStatement="use ch_qms; UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME  = '${jobName}' and VALID_FLG = 'Q';"
    				mysqlUtil "${updateStatement}"
    				log_info "updateEtlJob()  return: $?"
    				if [ $? -eq 0 ];then
    					#log_info "updateEtlJob() jobs共: ${pidC}支,第: ${count}支, 更新成功,update Sql is : ${updateStatement}, job Info: ----------------job Info----------------"${jobInfo}""				
    					log_info "updateEtlJob() jobs共: ${pidC}支,第: ${count}支, jobName: ${jobName}更新成功~~~,update Sql is : ${updateStatementSingle}"
    					log_debug "updateEtlJob() jobs共: ${pidC}支,第: ${count}支, 更新成功,update Sql is : ${updateStatement}"
    					sleep 2s
    					count=$[ $count + 1 ]
    					local selectStatement="use ch_qms; select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobName}' and VALID_FLG = 'Y'"	
    					log_debug "updateEtlJob() selectStatement is :${selectStatement} "
    					if [[ ${select_op} == "yes" ]];then
    						echo  "---------------checkEtlJob() job info-----------------"
    						tmp=$(mysqlUtil "${selectStatement}")
    						# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    						echo -e "\e[1;42m${tmp}\e[0m"
    						echo
    					fi
    				else 
    					log_error "updateEtlJob() updateStatement 更新失败,jobName: ${jobName}"; exit 99
    				fi				
    				#
    				#log_info "updateEtlJob() jobs共: ${pidC}支,第: ${count}支, jobName:${jobName} is runnping, pid is: ${pid}"
    			#else
    			#	log_error "updateEtlJob() jobName:${jobName} is not runnping, please check!!!!" 
    			#fi
    		done < "${etlListPath}"
    	else 
    		local sed_cmd1="s/,/|/g" # grep -E 'LoadGlassHst_AR|LoadGlassHst_CF' etl-list-46
    		log_info "updateEtlJob()  JobName:${UJob_NAME},select_op: ${select_op} "
    		local cond=`echo ${UJob_NAME} | sed ${sed_cmd1}` #LoadGlassHst_AR1|LoadGlassHst_AR|LoadGlassHst_CF|LoadGlassHst_OC
		#Validate that every requested job exists in the config file: compare the number of names in cond with the number of grep matches.
		local desireJobCnt=$(echo ${cond}|awk -F '|' '{print NF}') # Count the requested jobs; if it differs from the grep hit count, some of the input jobs are not configured. This also acts as a simple input sanity check.
		local oldjobs=`grep -E "\b${cond}\b" ${etlListPath}`
		local actualJobsCnt=`grep -cE "\b${cond}\b" ${etlListPath}`
		local select_op_single=${select_op}
		log_debug "updateEtlJob() desireJobCnt:${desireJobCnt},actualJobsCnt: ${actualJobsCnt}"
		if [ ${desireJobCnt} -ne ${actualJobsCnt} ];then log_warn "updateEtlJob() 输入的job list中存在未配置的job,您输入的jobList:${UJob_NAME},请在${etlListPath}检查是否配置"; exit 99;fi
		# TODO: prompt whether to continue: Y/N, continue/exit (default N)
    		log_debug "updateEtlJob() oldjobs: ${oldjobs}"
    		local jobs=`echo "${oldjobs}"`
    		if [[ -n ${jobs} ]];then
    			local countS=1
    			for i in ${jobs};do	
    				log_debug "updateEtlJob() grep -E ${cond} "${etlListPath}""
    				log_debug "updateEtlJob():${i} "
    				local jobNameSingle=`echo ${i} | awk -F = '{print $2}' |awk -F : '{print $1}'`
    				local portNameSingle=`echo ${i} | awk -F = '{print $3}'|awk -F : '{print $1}'`
    				log_debug "updateEtlJob() 遍历jobList,jobName:${i}, jobNameSingle: ${jobNameSingle}, portNameSingle: ${portNameSingle}"
    				local pidSingle=`ps -ef | grep -E "\b${jobNameSingle}\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    				log_debug "updateEtlJob() ps -ef | grep -E ${jobNameSingle} | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'  "
    			#if [[ -n ${pidSingle} ]];then 
    				local updateStatementSingle="use ch_qms;UPDATE etl_conf_d  SET RUN_FLG='N',ETL_TIMESTAMP=now() where JOB_NAME  = '${jobNameSingle}' and VALID_FLG = 'Y';"
    				local selectStatementSingle="use ch_qms;select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobNameSingle}' and VALID_FLG = 'Y';"
    				log_debug "updateEtlJob() updateStatementSingle is :${updateStatementSingle},selectStatement is :${selectStatementSingle} "
    				mysqlUtil "${updateStatementSingle}"
    				log_info "updateEtlJob() res:${res} return: $?"
    				if [ $? -eq 0 ];then
    					# log_info " updateEtlJob() jobName: ${jobName} 更新成功~~~, update Sql is : ${updateStatementSingle}"
    					log_info "updateEtlJob() jobs共: ${actualJobsCnt}支,第: ${count}支, jobName: ${jobNameSingle}更新成功~~~,update Sql is : ${updateStatementSingle}"
    					sleep 2s
    					count=$[ $count + 1 ]
    					log_debug "updateEtlJob() etl_list : ${etl_list}, select_op: ${select_op_single}" #获取到所有job
    					if [[ ${select_op} == "yes" ]];then
    						echo  "---------------checkEtlJob() job info-----------------"
    						tmp=$(mysqlUtil "${selectStatementSingle}")
    						# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    						echo -e "\e[1;42m${tmp}\e[0m"
    						echo
    					fi
    				else 
    					log_error "updateEtlJob() updateStatement 更新失败,jobName: ${jobName}";exit 99
    				fi				
    			#else
    			#	log_error "updateEtlJob() jobName:${jobNameSingle} is not runnping, please check!!!!" 
    			#fi				
    			done
    		else 
    			log_error "updateEtlJob() ,jobName:${jobNameSingle} 未配置,请确认配置文件"
    			exit 99;
    		fi
    	fi
    else 
    		log_error "updateEtlJob() etlListPath file is not exist"
    		exit 99
    fi
    
    }
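
# The comment above updateEtlJob notes that updates are sensitive and should record who
# changed what and when. A minimal audit-trail sketch (the helper name and log path are
# assumptions, not part of the original script); it could be called as
# `auditLog "${updateStatementSingle}"` right before each mysqlUtil update.
function auditLog() {
	local statement="$1"
	local auditFile="${etljobDIR}/etl-update-audit.log"  # assumed location
	# one line per update: timestamp | user | host | SQL statement
	echo "$(date +'%F %T') | $(whoami) | $(hostname) | ${statement}" >> "${auditFile}"
}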
    
# Create a symlink that points etl-list at the list file for the chosen database.
    
    function selectDB() {
local db_name=${1-'qms'}  # default qms
log_debug "selectDB() first par:$1"
path=$(dirname ${etlListPath})
if [ -f "${path}/etl-list-${db_name}" ];then
	# qms and eda are handled identically: repoint the etl-list symlink at the chosen list file
	ln -nfs  "${path}/etl-list-${db_name}" etl-list
	log_debug "selectDB() 软连接创建成功! ln -nfs  ${path}/etl-list-${db_name} etl-list "
else
	log_error "selectDB() ${path}/etl-list-${db_name} 不存在,请确认配置文件"
fi
    }
    
    
    # 检查job log
    function getJobLog() {
    log_debug "getJobLog() first par:$1"
    local etl_list=${1-'all'}  # default all
local select_op=${2-'no'}  # default no
    local pidC=`wc -l ${etlListPath} | awk -F ' ' '{print $1}'`
    log_debug "getJobLog() etl_list : ${etl_list}" #获取到所有job
    if [[ -f ${etlListPath} ]];then
    	if [ ${etl_list} == "all" ];then
    		local count=1
    		while read line;do
    			time='date +"%F %T"'
    			#处理配置
    			OLD_IFS=$IFS
    			IFS=:
    			arr=($line)
    			name=${arr[0]}
    			port=${arr[1]}
    			jobName=`echo $name |awk -F = '{print $2}'`
    			portId=`echo $port |awk -F = '{print $2}'`
    			local pid=`ps -ef | grep -E "\b${jobName}\b" | grep ${portId} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    			if [[ -n ${pid} ]];then 
    				log_info "getJobLog() jobs共: ${pidC} 支,第:${count}支, jobName:${jobName} is runnping, pid is: ${pid}"
    				sleep 2s
    				count=$[ $count + 1 ]	
    				if [[ ${select_op} == "yes" ]];then
    					local selectStatement="use ch_qms; select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobName}' and VALID_FLG = 'Y'"
    					echo  "---------------getJobLog() job info-----------------"
    					tmp=$(mysqlUtil "${selectStatement}")
    					# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    					echo -e "\e[1;42m${tmp}\e[0m"
    					echo
    				fi
    			else
    				log_error "getJobLog() jobName:${jobName} is not runnping, please check!!!!" 
    			fi
    		done < "${etlListPath}"
    	else 
    		local sed_cmd1="s/,/|/g" # grep -E 'LoadGlassHst_AR|LoadGlassHst_CF' etl-list-46
    		log_debug "getJobLog()  JobName:${CJob_NAME} "
    		local cond=`echo ${CJob_NAME} | sed ${sed_cmd1}` #LoadGlassHst_AR1|LoadGlassHst_AR|LoadGlassHst_CF|LoadGlassHst_OC
		#Validate that every requested job exists in the config file: compare the number of names in cond with the number of grep matches.
		local desireJobCnt=$(echo ${cond}|awk -F '|' '{print NF}') # Count the requested jobs; if it differs from the grep hit count, some of the input jobs are not configured. This also acts as a simple input sanity check; \b in the regex forces exact word-boundary matches.
		local oldjobs=`grep -E "\b${cond}\b" ${etlListPath}`
		local actualJobsCnt=`grep -cE "\b${cond}\b" ${etlListPath}`
		log_debug "getJobLog() desireJobCnt:${desireJobCnt},actualJobsCnt: ${actualJobsCnt}"
		if [ ${desireJobCnt} -ne ${actualJobsCnt} ];then log_warn "getJobLog() 输入的job list中存在未配置的job,您输入的jobList:${CJob_NAME},请在${etlListPath}检查是否配置";fi
		# TODO: prompt whether to continue: Y/N, continue/exit (default N)
    		local jobs=`echo "${oldjobs}"`
    		if [[ -n ${jobs} ]];then
    			local countS=1
    			for i in ${jobs};do	
    				log_debug "getJobLog() grep -E ${cond} "${etlListPath}""
    				log_debug "getJobLog():${i} "
    				local jobNameSingle=`echo ${i} | awk -F = '{print $2}' |awk -F : '{print $1}'`
    				local portNameSingle=`echo ${i} | awk -F = '{print $3}'|awk -F : '{print $1}'`
    				log_debug "遍历jobList,jobName:${i}, jobNameSingle: ${jobNameSingle}, portNameSingle: ${portNameSingle}"
    				local pidSingle=`ps -ef | grep -E "\b${jobNameSingle}\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}'`
    				log_debug "getJobLog() ps -ef | grep -E " \\b${jobNameSingle}\\b" | grep ${portNameSingle} | grep -v grep | grep java | grep -v $0 |  awk '{print \$2}' "
    			if [[ -n ${pidSingle} ]];then 
    				log_info "getJobLog() 您总共输入了: ${actualJobsCnt}支job, 第: ${countS}支, jobName:${jobNameSingle} is runnping, pid is: ${pidSingle}"
    				sleep 2s
    				countS=$[ $countS + 1 ]
    				if [[ ${select_op} == "yes" ]];then
    					local selectStatementSingle="use ch_qms; select shop,JOB_NAME,RUN_START_TIMESTAMP,RUN_END_TIMESTAMP, VALID_FLG,RUN_FLG,ETL_TIMESTAMP from etl_conf_d where JOB_NAME = '${jobNameSingle}' and VALID_FLG = 'Y'"
    					sqlPar=${jobNameSingle}
    					log_debug "sqlPar: ${sqlPar}, selectStatementSingle: ${selectStatementSingle}"
    					log_info  "---------------getJobLog() job info -----------------"
    					tmp=$(mysqlUtil "${selectStatementSingle}")
    					# 黑色=40,红色=41,绿色=42,黄色=43,蓝色=44,洋红=45,青色=46,白色=47
    					echo -e "\e[1;42m${tmp}\e[0m"
    					echo
    				fi
    			else
    				log_error "getJobLog() jobName:${jobNameSingle} is not runnping, please check!!!!" 
    			fi				
    			done
    		else 
    			log_error "getJobLog() ,jobName:${jobNameSingle} 未配置,请确认配置文件"
    			exit 99;
    		fi
    	fi
    else 
    		log_error "getJobLog() etlListPath file is not exist"
    		exit 99
    fi
    
    }
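
# getJobLog above currently only verifies that the process is alive. To surface recent
# log output one could additionally tail the job's log directory. A rough sketch; the
# directory layout below is an assumption (each job writing under its -Dlog.home name),
# adjust it to the real deployment.
function tailJobLog() {
	local logHome="$1"       # e.g. GLASSHSTNEW_AR, assumed to match the job's -Dlog.home value
	local lines=${2-50}
	local logDir="/opt/servers/etl/logs/${logHome}"  # assumed path
	if [ -d "${logDir}" ];then
		# newest file in the directory, last ${lines} lines
		tail -n "${lines}" "$(ls -t "${logDir}"/* | head -n 1)"
	else
		log_error "tailJobLog() log dir not found: ${logDir}"
	fi
}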
    
    
    while getopts "d:s:k:c:t:u:l:vh" opt;do
        case "$opt" in #使用shift命令将参数 一直往左移动,使case一直处理第一个位置的参数
             k) 
    			log_debug "k option, value is :$OPTARG"
             	KJob_NAME=$OPTARG;;  # 保存参数值
             s) 
    			log_debug "s option, value is :$OPTARG"
             	SJob_NAME=$OPTARG;; 
             u) 
    			log_debug "u option, value is :$OPTARG"
             	UJob_NAME=$OPTARG;;  
             c) 
    			log_debug "c option, value is :$OPTARG"
             	CJob_NAME=$OPTARG;;			
             t) 
    			log_debug "t option, value is :$OPTARG"
             	CSJob_NAME=$OPTARG;;						
             d) 
    			log_debug "d option, value is :$OPTARG"
             	DB_NAME=$OPTARG;;	
    	     l)
    			log_debug "log option, value is :$OPTARG"
             	JOB_LOB=$OPTARG;;			
             h) 
             	usage;exit ;;
             v) 
            	echo $VERSION;exit;;
             ?)
                usage;exit 4 #
    			;;
        esac
    done
    shift $(($OPTIND - 1)) # 保存了参数列表中getopts正在处理的参数位置
    log_debug "getopts 正在处理的参数位置 : $(($OPTIND - 1))"
    
    
    count1=1
     
    for para in "$@";do
    	log_debug "para #$count1 : $para"
    	count1=$[ $count1 + 1 ]
    done
    
    #定义公共变量
    #etl list存放位置
    etljobDIR="/home/scripts/etl"
    
# DB_NAME chooses the database, i.e. which etl-list the tool operates on. Later changes to the configured start scripts do not affect this program: it is a management tool, not the lifecycle of any single job.
# The qms list is used by default; because which database the jobs are started against matters, consider requiring the user to pass the -d option before the script can be used.
    if [[ -n ${DB_NAME} ]];then 
	case ${DB_NAME} in
	qms|eda)
	log_debug "当前所选DB 为: ${DB_NAME}"
	selectDB ${DB_NAME} ;;
	*)
	log_info "当前所选DB不支持."
	esac
    fi
    
# If KJob_NAME is set, stop the given job list; "all" stops every configured job.
if [[ -n ${KJob_NAME} ]];then 
	log_info "kill jobs : ${KJob_NAME}"
	stopetl ${KJob_NAME}
fi
    
# If SJob_NAME is set, start the given job list; "all" starts every configured job.
# --job_name=LoadGlassHst_AR:--server.port=8131:start-LoadGlassHst_AR.sh
if [[ -n ${SJob_NAME} ]];then 
	log_info "start jobs : ${SJob_NAME}"
	startEtlJob ${SJob_NAME}
fi
    
# CJob_NAME: check whether the given jobs are currently running; if the -t option is passed, the result is also confirmed against etl_conf_d.
    if [[ -n ${CJob_NAME} ]];then 
    	#if [[ ${CJob_NAME} == "all" && ${CSJob_NAME} == "yes" ]];then
    		log_info "start jobs : ${CJob_NAME} ${CSJob_NAME}"
    		checkEtlJob ${CJob_NAME} ${CSJob_NAME}
    	#else
    		#checkEtlJob ${CJob_NAME} 
    	#fi
    fi
    
    # UJob_NAME 更新job
    if [[ -n ${UJob_NAME} ]];then 
    	#if [[ ${UJob_NAME} == "all" && ${CSJob_NAME} == "yes" ]];then
    		log_info "update jobs : ${UJob_NAME}, select_op: ${CSJob_NAME}"
    		updateEtlJob ${UJob_NAME} ${CSJob_NAME}
    	#else
    	#	updateEtlJob ${UJob_NAME}
    	#fi
    fi
    
    #if [ -n ${JOB_LOB} ];then
    #	getJobLog ${JOB_LOB}
    #fi
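
Putting the options together, typical invocations look like the following (a sketch; etl.sh is a placeholder for whatever the script file is actually named):

# point the etl-list symlink at the qms configuration, then check every configured job
./etl.sh -d qms -c all
# check two specific jobs and also query etl_conf_d for them (-t yes)
./etl.sh -c LoadGlassHst_AR,LoadGlassHst_CF -t yes
# stop / start every configured job
./etl.sh -k all
./etl.sh -s all
# set RUN_FLG='N' in etl_conf_d for a single job and show the result
./etl.sh -u LoadGlassHst_AR -t yes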
    
    

Example: operating a MySQL database from shell

    1,LPL RNG,98,CS
    2,LPL BLG,70,CS
    3,LCK DK,80,CS
    4,LPL JDG,80,EC
    5,LPL WE,50,EC
    6,LES PSG,70,EC
    7,LEC FNC,30,EC
    8,LPL TES,90,AE
    9,Sruthi,89,AE
    10,Andrew,89,AE
    
    #!/bin/bash
    #文件名: create_db.sh
    #用途:创建MySQL数据库和数据表
    HOSTIP="10.50.10.163"
    USER="root"
    PASS="chot123"
    mysql -h $HOSTIP  -u $USER -p$PASS <<EOF 2> /dev/null
    CREATE DATABASE scores;
    EOF
    [ $? -eq 0 ] && echo Created DB || echo DB already exist
    mysql -h $HOSTIP -u $USER -p$PASS scores <<EOF 2> /dev/null
    CREATE TABLE scores(
    id int,
    name varchar(100),
    mark int,
    dept varchar(4)
    );
    EOF
    [ $? -eq 0 ] && echo Created table scores || \
    echo Table scores already exist
    mysql -h $HOSTIP -u $USER -p$PASS scores <<EOF
    DELETE FROM scores;
    EOF
    
    #!/bin/bash
    #文件名: write_to_db.sh
    #用途: 从CSV中读取数据并写入MySQL数据库
    HOSTIP="10.50.10.163"
    USER="root"
    PASS="chot123"
    if [ $# -ne 1 ];then
    	echo $0 DATAFILE
    	echo
    	exit 2
    fi
    data=$1
    while read line;
    do
    	oldIFS=$IFS
    	IFS=,
    	values=($line)
    	values[1]="\"`echo ${values[1]} | tr ' ' '#' `\""
    	values[3]="\"`echo ${values[3]}`\""
    	query=`echo ${values[@]} | tr ' #' ', ' `
    	IFS=$oldIFS
	# <<-EOF so the tab-indented heredoc terminator is still recognized inside the loop
	mysql -h $HOSTIP -u $USER -p$PASS scores <<-EOF
	INSERT INTO scores VALUES($query);
	EOF
    done< $data
    echo Wrote data into DB
    
    
    
    #!/bin/bash
    #文件名: read_db.sh
    #用途: 读取数据库
    HOSTIP="10.50.10.163"
    USER="root"
    PASS="chot123"
# list the distinct departments; tail -n +2 drops the column-header line from the mysql output
depts=`mysql -h $HOSTIP -u $USER -p$PASS scores <<EOF | tail -n +2
SELECT DISTINCT dept FROM scores;
EOF`
for d in $depts;
do
	echo Department : $d
	# <<-EOF so the tab-indented terminator is recognized inside the loop
	result="`mysql -h $HOSTIP -u $USER -p$PASS scores <<-EOF
	SET @i:=0;
	SELECT @i:=@i+1 as rank,name,mark FROM scores WHERE dept="$d" ORDER BY
	mark DESC;
	EOF`"
    	echo "$result"
    	echo
    done
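
Assuming the sample rows above are saved as data.csv (a name chosen here for illustration), the three scripts are run in order:

sh create_db.sh             # create the scores database and table
sh write_to_db.sh data.csv  # load the CSV rows into the scores table
sh read_db.sh               # print a per-department ranking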
    

Pitfall 1:

The script stores the comma-separated fields of each line in an array. Array assignment has the form array=(val1 val2 val3), where the values are separated by the Internal Field Separator (IFS). Because the CSV lines use commas to separate fields, it is enough to change IFS to a comma (IFS=,).
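
For example, a standalone snippet (not part of the scripts above) splitting one sample row:

line="1,LPL RNG,98,CS"
oldIFS=$IFS
IFS=,                 # split on commas instead of whitespace
fields=($line)        # fields[0]=1, fields[1]="LPL RNG", fields[2]=98, fields[3]=CS
IFS=$oldIFS
echo "${fields[1]}"   # prints: LPL RNG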

Pitfall 2:

The comma-separated fields in each line are id, name, mark and department. id and mark are integers, while name and department are strings and must therefore be quoted (see the combined snippet after Pitfall 3).

Pitfall 3:

name may contain spaces, which would clash with IFS, so the spaces in name are replaced with another character (#) and restored when the query is built.
To quote the strings, the array values are wrapped in " as prefix and suffix. tr replaces the spaces in name with #. Finally the query is assembled by turning spaces into commas and # back into spaces, and the SQL INSERT statement is executed.
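
Pitfalls 2 and 3 combined on one sample row (a standalone illustration):

values=(2 "LPL BLG" 70 CS)
values[1]="\"`echo ${values[1]} | tr ' ' '#' `\""   # -> "LPL#BLG"  (space hidden as #)
values[3]="\"${values[3]}\""                        # -> "CS"
query=`echo ${values[@]} | tr ' #' ', '`            # spaces -> commas, # -> space
echo "INSERT INTO scores VALUES($query);"
# INSERT INTO scores VALUES(2,"LPL BLG",70,"CS");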

Pitfall 4:

SET @i:=0 is a SQL construct that initializes the user variable i to 0; incrementing it once per returned row turns it into a rank counter.
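
For example, run against the CS rows of the sample data (rank_no is used as the alias here because RANK is a reserved word in newer MySQL versions):

mysql -h 10.50.10.163 -u root -pchot123 scores <<EOF
SET @i:=0;                              -- user variable i starts at 0
SELECT @i:=@i+1 AS rank_no, name, mark  -- incremented once per returned row
FROM scores WHERE dept='CS'
ORDER BY mark DESC;                     -- highest mark gets rank_no 1
EOF

On the sample CSV this lists LPL RNG (98) first, then LCK DK (80), then LPL BLG (70).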

Outlook

What benefits would containerizing these applications bring?
