Our company doesn't have a dedicated Elasticsearch team, so MySQL is the only realistic storage option, and the data doesn't need long-term persistence anyway.
Without further ado, let's get started.
Because SkyWalking does not ship with the MySQL JDBC driver, you need to put the JDBC jar into the oap-libs directory; see the official doc "MySQL | Apache SkyWalking" for reference.
The matching driver version can be found at Maven Repository: mysql (mvnrepository.com).
The application.yml file gets packaged into the image. You can pull the default one out of a running SkyWalking pod with kubectl cp and edit it. The parts I changed are specific to my project (the storage selector and the mysql block, commented inline below); the actual connection variables are defined in the skywalking-server Deployment YAML further down, so you can keep the default addresses here if you prefer. You do, however, need to add the driver line.
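If you want to start from the stock file, you can copy it out of a pod you have already started from the base image; a rough example (the pod name and namespace are placeholders, substitute your own):

kubectl -n skywalking get pods -l app=skywalking
# copy the default config out of the container so it can be edited locally
kubectl -n skywalking cp skywalking-xxxxxxxxxx-yyyyy:/skywalking/config/application.yml ./application.yml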
Here is my application.yml file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cluster:
selector: ${SW_CLUSTER:standalone}
standalone:
# Please check your ZooKeeper is 3.5+, However, it is also compatible with ZooKeeper 3.4.x. Replace the ZooKeeper 3.5+
# library the oap-libs folder with your ZooKeeper 3.4.x library.
zookeeper:
nameSpace: ${SW_NAMESPACE:""}
hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
# Retry Policy
baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
# Enable ACL
enableACL: ${SW_ZK_ENABLE_ACL:false} # disable ACL in default
schema: ${SW_ZK_SCHEMA:digest} # only support digest schema
expression: ${SW_ZK_EXPRESSION:skywalking:skywalking}
kubernetes:
namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
uidEnvName: ${SW_CLUSTER_K8S_UID:SKYWALKING_COLLECTOR_UID}
consul:
serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
# Consul cluster nodes, example: 10.0.0.1:8500,10.0.0.2:8500,10.0.0.3:8500
hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
aclToken: ${SW_CLUSTER_CONSUL_ACLTOKEN:""}
etcd:
serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
# etcd cluster nodes, example: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379
hostPort: ${SW_CLUSTER_ETCD_HOST_PORT:localhost:2379}
nacos:
serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
hostPort: ${SW_CLUSTER_NACOS_HOST_PORT:localhost:8848}
# Nacos Configuration namespace
namespace: ${SW_CLUSTER_NACOS_NAMESPACE:"public"}
# Nacos auth username
username: ${SW_CLUSTER_NACOS_USERNAME:""}
password: ${SW_CLUSTER_NACOS_PASSWORD:""}
# Nacos auth accessKey
accessKey: ${SW_CLUSTER_NACOS_ACCESSKEY:""}
secretKey: ${SW_CLUSTER_NACOS_SECRETKEY:""}
core:
selector: ${SW_CORE:default}
default:
# Mixed: Receive agent data, Level 1 aggregate, Level 2 aggregate
# Receiver: Receive agent data, Level 1 aggregate
# Aggregator: Level 2 aggregate
role: ${SW_CORE_ROLE:Mixed} # Mixed/Receiver/Aggregator
restHost: ${SW_CORE_REST_HOST:0.0.0.0}
restPort: ${SW_CORE_REST_PORT:12800}
restContextPath: ${SW_CORE_REST_CONTEXT_PATH:/}
restMinThreads: ${SW_CORE_REST_JETTY_MIN_THREADS:1}
restMaxThreads: ${SW_CORE_REST_JETTY_MAX_THREADS:200}
restIdleTimeOut: ${SW_CORE_REST_JETTY_IDLE_TIMEOUT:30000}
restAcceptorPriorityDelta: ${SW_CORE_REST_JETTY_DELTA:0}
restAcceptQueueSize: ${SW_CORE_REST_JETTY_QUEUE_SIZE:0}
gRPCHost: ${SW_CORE_GRPC_HOST:0.0.0.0}
gRPCPort: ${SW_CORE_GRPC_PORT:11800}
maxConcurrentCallsPerConnection: ${SW_CORE_GRPC_MAX_CONCURRENT_CALL:0}
maxMessageSize: ${SW_CORE_GRPC_MAX_MESSAGE_SIZE:0}
gRPCThreadPoolQueueSize: ${SW_CORE_GRPC_POOL_QUEUE_SIZE:-1}
gRPCThreadPoolSize: ${SW_CORE_GRPC_THREAD_POOL_SIZE:-1}
gRPCSslEnabled: ${SW_CORE_GRPC_SSL_ENABLED:false}
gRPCSslKeyPath: ${SW_CORE_GRPC_SSL_KEY_PATH:""}
gRPCSslCertChainPath: ${SW_CORE_GRPC_SSL_CERT_CHAIN_PATH:""}
gRPCSslTrustedCAPath: ${SW_CORE_GRPC_SSL_TRUSTED_CA_PATH:""}
downsampling:
- Hour
- Day
# Set a timeout on metrics data. After the timeout has expired, the metrics data will automatically be deleted.
enableDataKeeperExecutor: ${SW_CORE_ENABLE_DATA_KEEPER_EXECUTOR:true} # Turn it off then automatically metrics data delete will close.
dataKeeperExecutePeriod: ${SW_CORE_DATA_KEEPER_EXECUTE_PERIOD:5} # How often the data keeper executor runs periodically, unit is minute
recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:3} # Unit is day
metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:7} # Unit is day
# Cache metrics data for 1 minute to reduce database queries, and if the OAP cluster changes within that minute,
# the metrics may not be accurate within that minute.
enableDatabaseSession: ${SW_CORE_ENABLE_DATABASE_SESSION:true}
topNReportPeriod: ${SW_CORE_TOPN_REPORT_PERIOD:10} # top_n record worker report cycle, unit is minute
# Extra model columns are columns defined in the codes. These columns are not required logically in aggregation or further query,
# and they add more load to memory, OAP network and storage.
# But, once activated, users can see the names in the storage entities, which makes it easier to query the data themselves with 3rd party tools, such as Kibana->ES.
activeExtraModelColumns: ${SW_CORE_ACTIVE_EXTRA_MODEL_COLUMNS:false}
# The max length of service + instance names should be less than 200
serviceNameMaxLength: ${SW_SERVICE_NAME_MAX_LENGTH:70}
instanceNameMaxLength: ${SW_INSTANCE_NAME_MAX_LENGTH:70}
# The max length of service + endpoint names should be less than 240
endpointNameMaxLength: ${SW_ENDPOINT_NAME_MAX_LENGTH:150}
# Define the set of span tag keys, which should be searchable through the GraphQL.
searchableTracesTags: ${SW_SEARCHABLE_TAG_KEYS:http.method,status_code,db.type,db.instance,mq.queue,mq.topic,mq.broker}
# Define the set of log tag keys, which should be searchable through the GraphQL.
searchableLogsTags: ${SW_SEARCHABLE_LOGS_TAG_KEYS:level}
# The number of threads used to synchronously refresh the metrics data to the storage.
syncThreads: ${SW_CORE_SYNC_THREADS:2}
# The maximum number of processes supported for each synchronous storage operation. When the number of the flush data is greater than this value, it will be assigned to multiple cores for execution.
maxSyncOperationNum: ${SW_CORE_MAX_SYNC_OPERATION_NUM:50000}
storage:
selector: ${SW_STORAGE:mysql} # mysql is selected here; the default is h2
elasticsearch:
nameSpace: ${SW_NAMESPACE:""}
clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
user: ${SW_ES_USER:""}
password: ${SW_ES_PASSWORD:""}
trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
# Super data set has been defined in the codes, such as trace segments.The following 3 config would be improve es performance when storage super size data in es.
superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
advanced: ${SW_STORAGE_ES_ADVANCED:""}
elasticsearch7:
nameSpace: ${SW_NAMESPACE:""}
clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
# Super data set has been defined in the codes, such as trace segments.The following 3 config would be improve es performance when storage super size data in es.
superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
user: ${SW_ES_USER:""}
password: ${SW_ES_PASSWORD:""}
secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
advanced: ${SW_STORAGE_ES_ADVANCED:""}
h2:
driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db;DB_CLOSE_DELAY=-1}
user: ${SW_STORAGE_H2_USER:sa}
metadataQueryMaxSize: ${SW_STORAGE_H2_QUERY_MAX_SIZE:5000}
maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
mysql:
properties:
jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://mysqle41fbfaacca.rds.ivolces.com:3306/skywalking?useSSL=false&serverTimezone=UTC&useUnicode=true&characterEncoding=UTF-8"} # this is the MySQL address
dataSource.user: ${SW_DATA_SOURCE_USER:skywalking} # username
dataSource.password: ${SW_DATA_SOURCE_PASSWORD:rJnY4m} # password
dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
driver: com.mysql.jdbc.Driver # this line is newly added; it is the driver class used to connect to the database (this is the 5.x Connector/J class name)
tidb:
properties:
jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:4000/tidbswtest"}
dataSource.user: ${SW_DATA_SOURCE_USER:root}
dataSource.password: ${SW_DATA_SOURCE_PASSWORD:""}
dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
dataSource.useAffectedRows: ${SW_DATA_SOURCE_USE_AFFECTED_ROWS:true}
metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
influxdb:
# InfluxDB configuration
url: ${SW_STORAGE_INFLUXDB_URL:http://localhost:8086}
user: ${SW_STORAGE_INFLUXDB_USER:root}
password: ${SW_STORAGE_INFLUXDB_PASSWORD:}
database: ${SW_STORAGE_INFLUXDB_DATABASE:skywalking}
actions: ${SW_STORAGE_INFLUXDB_ACTIONS:1000} # the number of actions to collect
duration: ${SW_STORAGE_INFLUXDB_DURATION:1000} # the time to wait at most (milliseconds)
batchEnabled: ${SW_STORAGE_INFLUXDB_BATCH_ENABLED:true}
fetchTaskLogMaxSize: ${SW_STORAGE_INFLUXDB_FETCH_TASK_LOG_MAX_SIZE:5000} # the max number of fetch task log in a request
connectionResponseFormat: ${SW_STORAGE_INFLUXDB_CONNECTION_RESPONSE_FORMAT:MSGPACK} # the response format of connection to influxDB, cannot be anything but MSGPACK or JSON.
postgresql:
properties:
jdbcUrl: ${SW_JDBC_URL:"jdbc:postgresql://localhost:5432/skywalking"}
dataSource.user: ${SW_DATA_SOURCE_USER:postgres}
dataSource.password: ${SW_DATA_SOURCE_PASSWORD:123456}
dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
zipkin-elasticsearch7:
nameSpace: ${SW_NAMESPACE:""}
clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
# Super data set has been defined in the codes, such as trace segments.The following 3 config would be improve es performance when storage super size data in es.
superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
user: ${SW_ES_USER:""}
password: ${SW_ES_PASSWORD:""}
secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
syncBulkActions: ${SW_STORAGE_ES_SYNC_BULK_ACTIONS:50000} # Execute the sync bulk metrics data every ${SW_STORAGE_ES_SYNC_BULK_ACTIONS} requests
flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
advanced: ${SW_STORAGE_ES_ADVANCED:""}
agent-analyzer:
selector: ${SW_AGENT_ANALYZER:default}
default:
sampleRate: ${SW_TRACE_SAMPLE_RATE:10000} # The sample rate precision is 1/10000. 10000 means 100% sample in default.
slowDBAccessThreshold: ${SW_SLOW_DB_THRESHOLD:default:200,mongodb:100} # The slow database access thresholds. Unit ms.
forceSampleErrorSegment: ${SW_FORCE_SAMPLE_ERROR_SEGMENT:true} # When sampling mechanism active, this config can open(true) force save some error segment. true is default.
segmentStatusAnalysisStrategy: ${SW_SEGMENT_STATUS_ANALYSIS_STRATEGY:FROM_SPAN_STATUS} # Determine the final segment status from the status of spans. Available values are `FROM_SPAN_STATUS` , `FROM_ENTRY_SPAN` and `FROM_FIRST_SPAN`. `FROM_SPAN_STATUS` represents the segment status would be error if any span is in error status. `FROM_ENTRY_SPAN` means the segment status would be determined by the status of entry spans only. `FROM_FIRST_SPAN` means the segment status would be determined by the status of the first span only.
# Nginx and Envoy agents can't get the real remote address.
# Exit spans with the component in the list would not generate the client-side instance relation metrics.
noUpstreamRealAddressAgents: ${SW_NO_UPSTREAM_REAL_ADDRESS:6000,9000}
slowTraceSegmentThreshold: ${SW_SLOW_TRACE_SEGMENT_THRESHOLD:-1} # Setting this threshold about the latency would make the slow trace segments sampled if they cost more time, even the sampling mechanism activated. The default value is `-1`, which means would not sample slow traces. Unit, millisecond.
meterAnalyzerActiveFiles: ${SW_METER_ANALYZER_ACTIVE_FILES:spring-sleuth} # Which files could be meter analyzed, files split by ","
log-analyzer:
selector: ${SW_LOG_ANALYZER:default}
default:
lalFiles: ${SW_LOG_LAL_FILES:default}
malFiles: ${SW_LOG_MAL_FILES:""}
event-analyzer:
selector: ${SW_EVENT_ANALYZER:default}
default:
receiver-sharing-server:
selector: ${SW_RECEIVER_SHARING_SERVER:default}
default:
# For Jetty server
restHost: ${SW_RECEIVER_SHARING_REST_HOST:0.0.0.0}
restPort: ${SW_RECEIVER_SHARING_REST_PORT:0}
restContextPath: ${SW_RECEIVER_SHARING_REST_CONTEXT_PATH:/}
restMinThreads: ${SW_RECEIVER_SHARING_JETTY_MIN_THREADS:1}
restMaxThreads: ${SW_RECEIVER_SHARING_JETTY_MAX_THREADS:200}
restIdleTimeOut: ${SW_RECEIVER_SHARING_JETTY_IDLE_TIMEOUT:30000}
restAcceptorPriorityDelta: ${SW_RECEIVER_SHARING_JETTY_DELTA:0}
restAcceptQueueSize: ${SW_RECEIVER_SHARING_JETTY_QUEUE_SIZE:0}
# For gRPC server
gRPCHost: ${SW_RECEIVER_GRPC_HOST:0.0.0.0}
gRPCPort: ${SW_RECEIVER_GRPC_PORT:0}
maxConcurrentCallsPerConnection: ${SW_RECEIVER_GRPC_MAX_CONCURRENT_CALL:0}
maxMessageSize: ${SW_RECEIVER_GRPC_MAX_MESSAGE_SIZE:0}
gRPCThreadPoolQueueSize: ${SW_RECEIVER_GRPC_POOL_QUEUE_SIZE:0}
gRPCThreadPoolSize: ${SW_RECEIVER_GRPC_THREAD_POOL_SIZE:0}
gRPCSslEnabled: ${SW_RECEIVER_GRPC_SSL_ENABLED:false}
gRPCSslKeyPath: ${SW_RECEIVER_GRPC_SSL_KEY_PATH:""}
gRPCSslCertChainPath: ${SW_RECEIVER_GRPC_SSL_CERT_CHAIN_PATH:""}
authentication: ${SW_AUTHENTICATION:""}
receiver-register:
selector: ${SW_RECEIVER_REGISTER:default}
default:
receiver-trace:
selector: ${SW_RECEIVER_TRACE:default}
default:
receiver-jvm:
selector: ${SW_RECEIVER_JVM:default}
default:
receiver-clr:
selector: ${SW_RECEIVER_CLR:default}
default:
receiver-profile:
selector: ${SW_RECEIVER_PROFILE:default}
default:
receiver-zabbix:
selector: ${SW_RECEIVER_ZABBIX:-}
default:
port: ${SW_RECEIVER_ZABBIX_PORT:10051}
host: ${SW_RECEIVER_ZABBIX_HOST:0.0.0.0}
activeFiles: ${SW_RECEIVER_ZABBIX_ACTIVE_FILES:agent}
service-mesh:
selector: ${SW_SERVICE_MESH:default}
default:
envoy-metric:
selector: ${SW_ENVOY_METRIC:default}
default:
acceptMetricsService: ${SW_ENVOY_METRIC_SERVICE:true}
alsHTTPAnalysis: ${SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS:""}
# `k8sServiceNameRule` allows you to customize the service name in ALS via Kubernetes metadata,
# the available variables are `pod`, `service`, f.e., you can use `${service.metadata.name}-${pod.metadata.labels.version}`
# to append the version number to the service name.
# Be careful, when using environment variables to pass this configuration, use single quotes(`''`) to avoid it being evaluated by the shell.
k8sServiceNameRule: ${K8S_SERVICE_NAME_RULE:"${pod.metadata.labels.(service.istio.io/canonical-name)}"}
prometheus-fetcher:
selector: ${SW_PROMETHEUS_FETCHER:-}
default:
enabledRules: ${SW_PROMETHEUS_FETCHER_ENABLED_RULES:"self"}
kafka-fetcher:
selector: ${SW_KAFKA_FETCHER:-}
default:
bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:localhost:9092}
partitions: ${SW_KAFKA_FETCHER_PARTITIONS:3}
replicationFactor: ${SW_KAFKA_FETCHER_PARTITIONS_FACTOR:2}
enableMeterSystem: ${SW_KAFKA_FETCHER_ENABLE_METER_SYSTEM:false}
enableLog: ${SW_KAFKA_FETCHER_ENABLE_LOG:false}
isSharding: ${SW_KAFKA_FETCHER_IS_SHARDING:false}
consumePartitions: ${SW_KAFKA_FETCHER_CONSUME_PARTITIONS:""}
kafkaHandlerThreadPoolSize: ${SW_KAFKA_HANDLER_THREAD_POOL_SIZE:-1}
kafkaHandlerThreadPoolQueueSize: ${SW_KAFKA_HANDLER_THREAD_POOL_QUEUE_SIZE:-1}
receiver-meter:
selector: ${SW_RECEIVER_METER:default}
default:
receiver-otel:
selector: ${SW_OTEL_RECEIVER:-}
default:
enabledHandlers: ${SW_OTEL_RECEIVER_ENABLED_HANDLERS:"oc"}
enabledOcRules: ${SW_OTEL_RECEIVER_ENABLED_OC_RULES:"istio-controlplane"}
receiver_zipkin:
selector: ${SW_RECEIVER_ZIPKIN:-}
default:
host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
jettyMinThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MIN_THREADS:1}
jettyMaxThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MAX_THREADS:200}
jettyIdleTimeOut: ${SW_RECEIVER_ZIPKIN_JETTY_IDLE_TIMEOUT:30000}
jettyAcceptorPriorityDelta: ${SW_RECEIVER_ZIPKIN_JETTY_DELTA:0}
jettyAcceptQueueSize: ${SW_RECEIVER_ZIPKIN_QUEUE_SIZE:0}
receiver_jaeger:
selector: ${SW_RECEIVER_JAEGER:-}
default:
gRPCHost: ${SW_RECEIVER_JAEGER_HOST:0.0.0.0}
gRPCPort: ${SW_RECEIVER_JAEGER_PORT:14250}
receiver-browser:
selector: ${SW_RECEIVER_BROWSER:default}
default:
# The sample rate precision is 1/10000. 10000 means 100% sample in default.
sampleRate: ${SW_RECEIVER_BROWSER_SAMPLE_RATE:10000}
receiver-log:
selector: ${SW_RECEIVER_LOG:default}
default:
query:
selector: ${SW_QUERY:graphql}
graphql:
path: ${SW_QUERY_GRAPHQL_PATH:/graphql}
alarm:
selector: ${SW_ALARM:default}
default:
telemetry:
selector: ${SW_TELEMETRY:none}
none:
prometheus:
host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:false}
sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}
configuration:
selector: ${SW_CONFIGURATION:none}
none:
grpc:
host: ${SW_DCS_SERVER_HOST:""}
port: ${SW_DCS_SERVER_PORT:80}
clusterName: ${SW_DCS_CLUSTER_NAME:SkyWalking}
period: ${SW_DCS_PERIOD:20}
apollo:
apolloMeta: ${SW_CONFIG_APOLLO:http://localhost:8080}
apolloCluster: ${SW_CONFIG_APOLLO_CLUSTER:default}
apolloEnv: ${SW_CONFIG_APOLLO_ENV:""}
appId: ${SW_CONFIG_APOLLO_APP_ID:skywalking}
period: ${SW_CONFIG_APOLLO_PERIOD:5}
zookeeper:
period: ${SW_CONFIG_ZK_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
nameSpace: ${SW_CONFIG_ZK_NAMESPACE:/default}
hostPort: ${SW_CONFIG_ZK_HOST_PORT:localhost:2181}
# Retry Policy
baseSleepTimeMs: ${SW_CONFIG_ZK_BASE_SLEEP_TIME_MS:1000} # initial amount of time to wait between retries
maxRetries: ${SW_CONFIG_ZK_MAX_RETRIES:3} # max number of times to retry
etcd:
period: ${SW_CONFIG_ETCD_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
group: ${SW_CONFIG_ETCD_GROUP:skywalking}
serverAddr: ${SW_CONFIG_ETCD_SERVER_ADDR:localhost:2379}
clusterName: ${SW_CONFIG_ETCD_CLUSTER_NAME:default}
consul:
# Consul host and ports, separated by comma, e.g. 1.2.3.4:8500,2.3.4.5:8500
hostAndPorts: ${SW_CONFIG_CONSUL_HOST_AND_PORTS:1.2.3.4:8500}
# Sync period in seconds. Defaults to 60 seconds.
period: ${SW_CONFIG_CONSUL_PERIOD:60}
# Consul aclToken
aclToken: ${SW_CONFIG_CONSUL_ACL_TOKEN:""}
k8s-configmap:
period: ${SW_CONFIG_CONFIGMAP_PERIOD:60}
namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
nacos:
# Nacos Server Host
serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1}
# Nacos Server Port
port: ${SW_CONFIG_NACOS_SERVER_PORT:8848}
# Nacos Configuration Group
group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking}
# Nacos Configuration namespace
namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:}
# Unit seconds, sync period. Default fetch every 60 seconds.
period: ${SW_CONFIG_NACOS_PERIOD:60}
# Nacos auth username
username: ${SW_CONFIG_NACOS_USERNAME:""}
password: ${SW_CONFIG_NACOS_PASSWORD:""}
# Nacos auth accessKey
accessKey: ${SW_CONFIG_NACOS_ACCESSKEY:""}
secretKey: ${SW_CONFIG_NACOS_SECRETKEY:""}
exporter:
selector: ${SW_EXPORTER:-}
grpc:
targetHost: ${SW_EXPORTER_GRPC_HOST:127.0.0.1}
targetPort: ${SW_EXPORTER_GRPC_PORT:9870}
health-checker:
selector: ${SW_HEALTH_CHECKER:-}
default:
checkIntervalSeconds: ${SW_HEALTH_CHECKER_INTERVAL_SECONDS:5}
configuration-discovery:
selector: ${SW_CONFIGURATION_DISCOVERY:default}
default:
disableMessageDigest: ${SW_DISABLE_MESSAGE_DIGEST:false}
receiver-event:
selector: ${SW_RECEIVER_EVENT:default}
default:
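One prerequisite before building the image and starting the OAP: the skywalking database named in the JDBC URL has to exist already on the MySQL/RDS instance (the OAP creates its tables on first startup, but not the database itself). Roughly, with whatever admin account and host you actually have (placeholders below):

mysql -h <your-mysql-host> -u <admin-user> -p -e "CREATE DATABASE IF NOT EXISTS skywalking DEFAULT CHARACTER SET utf8mb4;"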
Below is the Dockerfile. I use the apache/skywalking-oap-server:8.5.0-es7 image as the base (pulled through our own registry), and mysql-connector-java-5.1.49.jar is the JDBC jar I downloaded.
FROM wms-prod-cn-beijing.cr.volces.com/skywalking/skywalking-oap-server:8.5.0-es7
COPY application.yml /skywalking/config/application.yml
COPY mysql-connector-java-5.1.49.jar /skywalking/oap-libs/
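Building and pushing the image is the usual routine; a sketch using the registry and tag that the Deployment below references (swap in your own registry if it differs):

docker build -t wms-prod-cn-beijing.cr.volces.com/skywalking/skywalking-oap-server:skywalking-jdbc .
docker push wms-prod-cn-beijing.cr.volces.com/skywalking/skywalking-oap-server:skywalking-jdbc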
Below is the Deployment file (plus its Service) for deploying the skywalking-server.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: skywalking
  name: skywalking
  namespace: skywalking
spec:
  replicas: 2
  selector:
    matchLabels:
      app: skywalking
  template:
    metadata:
      labels:
        app: skywalking
    spec:
      nodeSelector:
        app: skywalking
      containers:
        - name: skywalking
          image: wms-prod-cn-beijing.cr.volces.com/skywalking/skywalking-oap-server:skywalking-jdbc
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 12800
              name: http
              protocol: TCP
            - containerPort: 11800
              name: grpc
              protocol: TCP
          env:
            - name: SW_STORAGE
              value: "mysql"
            - name: SW_JDBC_URL
              value: "jdbc:mysql://mysqle41fbfaaceca.rds.ivolces.com:3306/skywalking?useSSL=false&serverTimezone=UTC&useUnicode=true&characterEncoding=UTF-8"
            - name: SW_DATA_SOURCE_USER
              value: "skywalking"
            - name: SW_DATA_SOURCE_PASSWORD
              value: "rOJnY4wm"
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking
  labels:
    app: skywalking
  namespace: skywalking
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 12800
      protocol: TCP
      targetPort: 12800
    - name: grpc
      port: 11800
      protocol: TCP
      targetPort: 11800
  selector:
    app: skywalking
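After applying the manifest I just sanity-check that the OAP pods come up and manage to create their tables in MySQL; a rough sequence (the file name is whatever you saved the YAML above as):

# create the namespace first if it does not exist yet
kubectl create namespace skywalking
kubectl apply -f skywalking-server.yaml
kubectl -n skywalking get pods -l app=skywalking
# the startup log should show the MySQL storage module initializing without JDBC errors
kubectl -n skywalking logs deploy/skywalking --tail=100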
Below is the Deployment file for skywalking-ui.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: skywalking-ui
  name: skywalking-ui
  namespace: skywalking
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-ui
  template:
    metadata:
      labels:
        app: skywalking-ui
    spec:
      containers:
        - env:
            - name: SW_OAP_ADDRESS
              value: http://skywalking.skywalking.svc.cluster.local:12800 # the OAP server's Service address and port
          image: apache/skywalking-ui:8.5.0
          imagePullPolicy: IfNotPresent
          name: skywalking-ui
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
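To check the UI before exposing it, a quick port-forward works; a sketch (again, the file name is whatever you saved this manifest as):

kubectl apply -f skywalking-ui.yaml
kubectl -n skywalking get pods -l app=skywalking-ui
# temporary local access at http://localhost:8080
kubectl -n skywalking port-forward deploy/skywalking-ui 8080:8080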
The complete YAML files are also attached at the top of this article.
If you need access from the public internet, you can put a LoadBalancer Service in front of skywalking-ui; a sketch follows.
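A minimal sketch of such a Service (the name skywalking-ui-lb is made up, and any cloud-specific LoadBalancer annotations are omitted):

apiVersion: v1
kind: Service
metadata:
  name: skywalking-ui-lb
  namespace: skywalking
spec:
  type: LoadBalancer
  selector:
    app: skywalking-ui
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080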
With that, the UI is reachable. My applications are Java services.
To hook a service into SkyWalking, the SkyWalking agent package needs to be baked into the image of the pod you want to monitor, as sketched below.
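A rough sketch of what that looks like for a Java image: copy the skywalking-agent directory from the official agent distribution into the image and start the JVM with -javaagent, pointing the agent at the OAP gRPC Service (port 11800). The base image, jar name and service name here are placeholders; SW_AGENT_NAME and SW_AGENT_COLLECTOR_BACKEND_SERVICES are the standard agent environment overrides.

FROM openjdk:8-jre
# skywalking-agent comes from the apache-skywalking-apm distribution matching your OAP version
COPY skywalking-agent /skywalking/agent
COPY my-app.jar /app/my-app.jar
ENV SW_AGENT_NAME=my-app
ENV SW_AGENT_COLLECTOR_BACKEND_SERVICES=skywalking.skywalking.svc.cluster.local:11800
ENTRYPOINT ["java", "-javaagent:/skywalking/agent/skywalking-agent.jar", "-jar", "/app/my-app.jar"]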