git.io/bash-tools
1000+ DevOps shell scripts and an advanced bash environment.
Fast, advanced systems engineering, automation, APIs, shorter CLIs and more.
Used heavily across many GitHub repositories, dozens of DockerHub builds (Dockerfiles) and 600+ CI builds.
Quickly query popular APIs with just /path/endpoint. .bashrc + .bash.d/*.sh - aliases, functions, colouring, dynamic Git & shell behaviour enhancements, automatic paths for installed software, and support for Python, Perl, Ruby, NodeJS and Golang across Linux distributions and Mac. See .bash.d/README.md.

- install/ - contains many installation scripts for popular open source software, including direct binary downloads from GitHub releases
- configs/ - dot config files for many common technologies
- setup/ - setup scripts, package lists, extra configs, Mac OS X settings etc.
- .bash.d/ - interactive library
- lib/ - script and CI library

See also: similar DevOps repos in other languages.
Hari Sekhon
Cloud & Big Data Contractor, United Kingdom
(ex-Cloudera, former Hortonworks consultant)
(you're welcome to connect with me on LinkedIn)
To bootstrap, install packages and link this repo into your shell profile to inherit all configs, run:
curl -L https://git.io/bash-bootstrap | sh

This adds sourcing to .bashrc / .bash_profile to automatically inherit all the .bash.d/*.sh environment enhancements for all technologies (see the inventory below), and symlinks the .* config dotfiles to $HOME for git, vim, top, htop, screen, tmux, editorconfig, ansible, postgresql (.psqlrc) etc. - only if they don't already exist, so there is no conflict with your own configs.

To install only the package dependencies needed to run the scripts, just cd to the git clone directory and run make:
git clone https://github.com/HariSekhon/DevOps-Bash-tools bash-tools
cd bash-tools
make

make install additionally sets up your shell profile to source this repo's environment. See the individual setup sections below for more install/uninstall options.
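The profile-linking step above is idempotent, so re-running the install never duplicates the source line. A minimal sketch of that idea (the real logic lives in this repo's Makefile and setup scripts; link_profile and clone_dir are hypothetical names for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Append a "source" line to a shell profile only if it isn't already there,
# so repeated installs stay idempotent
link_profile() {
    local bashrc="$1" clone_dir="$2"
    local source_line="source \"$clone_dir/.bashrc\""
    # -x matches the whole line, -F matches it literally
    grep -qxF "$source_line" "$bashrc" 2>/dev/null ||
        echo "$source_line" >> "$bashrc"
}
```

Running link_profile twice against the same profile file leaves exactly one source line in it.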
The inventory includes .gitconfig, .vimrc, .screenrc, .tmux.conf, .toprc, .gitignore..., .bashrc, the .bash.d/ interactive library, the lib/ script library, .psqlrc and more.

configs/ directory:
- .* - dot conf files for lots of common software, eg. advanced .vimrc, .gitconfig, massive .gitignore, .editorconfig, .screenrc, .tmux.conf etc.
- .vimrc - contains many awesome vim tweaks, plus hotkeys for linting lots of different file types in place, including Python, Perl, Bash/Shell, Dockerfiles, JSON, YAML, XML, CSV, INI/properties files, LDAP LDIF etc. without leaving the editor!
- .screenrc - fancy screen configuration including an advanced colour bar, large history, re-attachability, auto-blanking etc.
- .tmux.conf - tmux configuration
- .gitconfig - advanced git configuration
- .gitignore - extensive gitignore

.bashrc and .bash.d/ directory:
- .bashrc - sources .bash.d/*.sh for all the interactive environment enhancements; make bash links .bashrc / .bash_profile and the .* dotfiles to $HOME
- lib/*.sh - bash utility libraries full of functions for Docker, environment, CI detection (Travis CI, Jenkins etc.), port and HTTP URL availability and content checks etc. Sourced from all my other GitHub repos to make setting up Dockerized tests easier.
- install/install_*.sh - various easy-to-use installation scripts for the common technologies below

bin/ directory:
- login.sh - logs in to the major cloud platforms if their credentials are found in the environment, eg. AWS, GCP, Azure, GitHub etc. CLIs... and Docker registries: DockerHub, GHCR, ECR, GCR, GAR, ACR, GitLab, Quay...
- clean_caches.sh - cleans out OS package and programming language caches - useful to save space or reduce Docker image size
- delete_duplicate_files.sh - deletes duplicate files with (N) suffixes, typically caused by web browser downloads, in the given or current directory. For safety, checks they are exact duplicates of the matching basename file without the (N) suffix, with identical checksums. Prompts per file before deleting. To auto-delete, run yes | delete_duplicate_files.sh. A quick way to clean up your ~/Downloads directory; can be placed in your user crontab
- download_url_file.sh - downloads a file from a URL using wget with no-clobber and continue support, or curl with an atomic replacement to avoid race conditions. Used by github/github_download_release_file.sh, github_download_release_jar.sh and install/download_*_jar.sh
- curl_auth.sh - shortens curl commands by auto-loading OAuth2 / JWT API tokens or usernames and passwords from environment variables or an interactive starred password prompt, passed through a RAM file descriptor to avoid putting them on the command line (which would leak credentials in the process list or OS audit log files). Used by many adjacent API querying scripts
- find_duplicate_files*.sh - finds duplicate files by size and/or checksum in the given directory trees. Checksums are only done on files that already have matching byte counts, for efficiency
- find_broken_links.sh - finds broken links, with delays to avoid tripping defenses
- find_broken_symlinks.sh - finds broken symlinks pointing to non-existent files/directories
- find_lock.sh - tries to find a lock file in use in the given or current working directory by diffing snapshots of the file listing taken before and after prompting you to open/close the app
- http_duplicate_urls.sh - finds duplicate URLs in a given web page
- ldapsearch.sh - shortens ldapsearch commands by inferring switches from environment variables
- ldap_user_recurse.sh / ldap_group_recurse.sh - recurses Active Directory LDAP users upwards to find all parent groups, or downwards to find all nested users (useful for debugging LDAP integrations and group-based permissions)
- linux_distro_versions.sh - quickly returns the list of major versions for a given Linux distribution
- diff_line_threshold.sh - compares two files with a line-count difference threshold to determine whether they are substantially different. Useful to avoid overwriting files that are not merely updated but completely different
- organize_downloads.sh - moves files with well-known extensions older than 1 week out of the $HOME/Downloads directory
- copy_to_clipboard.sh - copies stdin or string args to the system clipboard on Linux or Mac
- paste_from_clipboard.sh - pastes from the system clipboard to stdout on Linux or Mac
- paste_diff_settings.sh - takes before and after snapshots of the clipboard and diffs them to show configuration changes
- processes_ram_sum.sh - sums the RAM usage of all processes matching a given regex, in GB to one decimal place
- pldd.sh - parses /proc on Linux to show the .so dynamically loaded shared libraries a program PID is using. A runtime equivalent of the classic static ldd command, needed because the system pldd command often fails to attach to a process
- random_select.sh - randomly selects one of the given args. Useful for sampling, running random subsets of large test suites etc.
- random_number.sh - prints a random integer between two integer arguments (inclusive)
- random_string.sh - prints a random alphanumeric string of a given length
- shields_embed_logo.sh - base64-encodes a given icon file or URL and prints the logo=... URL parameter you need to add to a shields.io badge URL
- shred_file.sh - overwrites a file 7 times before deleting it, to prevent recovery of sensitive information
- shred_free_space.sh - overwrites free space to prevent recovery of sensitive information from deleted files
- split.sh - splits a large file into N parts (defaults to the number of CPU cores) to parallelize operations on it
- ssl_get_cert.sh - fetches the SSL cert from a remote host:port server in a format you can pipe, save and use locally, eg. in Java truststores
- ssl_verify_cert.sh - verifies a remote SSL certificate (a battle-tested, more feature-rich version check_ssl_cert.pl exists in the Advanced Nagios Plugins repo)
- ssl_verify_cert_by_ip.sh - verifies the SSL certificate on a specific IP address. Useful for testing the SSL origin addresses behind CDNs, eg. Cloudflare proxied origins before enabling end-to-end Full (strict) SSL mode, or Kubernetes ingresses (see also curl_k8s_ingress.sh)
- ttygif.sh - creates a GIF from running a terminal command using ttyrec and ttygif, then opens the resulting GIF
- asciinema.sh - creates a GIF from running a terminal command using asciinema and agg, then opens the resulting GIF
- terminalizer.sh - creates a GIF from running a terminal command using terminalizer, then opens the resulting GIF
- urlencode.sh / urldecode.sh - URL encode / decode quickly on the command line, in pipes etc.
- urlextract.sh - extracts URLs from a given string arg, file or standard input
- urlopen.sh - opens the URL given as an arg, or the first URL found from stdin or a given file, using the system's default browser
- vagrant_hosts.sh - generates /etc/hosts output from a Vagrantfile
- vagrant_total_mb.sh - totals the RAM of the VMs in a Vagrantfile

See also the Knowledge Base notes for Linux and Mac.
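The size-first, checksum-second approach described for find_duplicate_files*.sh can be sketched roughly as follows - a simplified illustration, not the repo's actual implementation, assuming GNU stat/xargs/uniq (Linux) and filenames without spaces:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Find duplicate files under a directory: group by byte count first (cheap),
# and only md5sum the files whose sizes already collide (expensive)
find_dupes() {
    local dir="$1"
    find "$dir" -type f -print0 |
        xargs -0 stat -c '%s %n' |                 # "size path" per line (GNU stat)
        sort -n |
        awk 'seen[$1]++ { print prev[$1]; print } { prev[$1] = $0 }' |
        cut -d' ' -f2- |                           # keep just the candidate paths
        sort -u |
        xargs -d '\n' md5sum |                     # hash only size-colliding files
        sort |
        uniq -w32 -D                               # first 32 chars = the md5 hash
}
```

The output lists every file belonging to a duplicate group, prefixed by its shared checksum.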
Mac automation scripts for automating the Mac UI and settings.
bin/ directory:
- mac_diff_settings.sh - takes before and after snapshots of UI settings changes, making it easy to find the defaults keys to add to setup/mac_settings.sh to persist your settings
- mac_iso_to_usb.sh - converts a given ISO file to a USB boot image and burns it to a given or detected inserted USB drive
- copy_to_clipboard.sh - copies stdin or string args to the system clipboard on Linux or Mac
- paste_from_clipboard.sh - pastes from the system clipboard to stdout on Linux or Mac
- paste_diff_settings.sh - takes before and after snapshots of the clipboard and diffs them to show configuration changes

applescript/ directory:

- keystrokes.sh - sends N keystroke combinations
- mouse_clicks.sh - sends N mouse clicks in sequence at the given screen coordinates
- get_mouse_coordinates.sh - prints the current mouse coordinates - useful to know what to pass to the scripts above
- mouse_clicks_remote_desktop.sh - switches to Microsoft Remote Desktop, waits 10 seconds, then clicks the mouse once a minute to prevent the screensaver kicking in. A workaround for Active Directory group policies whose screensaver cannot be disabled. Point the mouse to an area where clicks have no effect, Cmd-Tab to a terminal and run this
- get_frontmost_process_title.scpt - detects the foremost window
- set_frontmost_process.scpt - switches to bring a given app to the foreground, to send keystrokes / mouse clicks to it
- browser_get_default.scpt - gets the configured default browser in a format that can be passed to AppleScript (the scripts above)
- is_screen_locked.py - detects if the screen is locked, to stop sending keystrokes or mouse clicks
- is_screensaver_running.scpt - detects if the screensaver is running, to stop sending keystrokes or mouse clicks
- screensaver_activate.scpt - activates the screensaver

See also the Mac page in HariSekhon/Knowledge-Base.
monitoring/ directory:
- dump_stats.sh - dumps common command outputs to text files in a local tarball. Useful for collecting support information for vendor support cases
- grafana_api.sh - queries the Grafana API with authentication
- log_timestamp_large_intervals.sh - finds log lines whose timestamp intervals exceed a given number of seconds, outputting those log lines with the difference between the last timestamp and the current one. Useful to find operations that took a long time in log files, such as CI/CD logs
- prometheus.sh - starts Prometheus locally, downloading it first if not found in $PATH
- prometheus_docker.sh - starts Prometheus in Docker using docker-compose
- prometheus_node_exporter.sh - starts Prometheus node_exporter locally, downloading it first if not found in $PATH
- ssh_dump_stats.sh - dumps common command outputs from a remote server to a local tarball using ssh and dump_stats.sh. Useful for vendor support cases
- ssh_dump_logs.sh - dumps logs from a server to local text files using ssh, for uploading to vendor support cases

See also the HariSekhon/Knowledge-Base doc pages for Grafana, Prometheus, OpenTSDB, InfluxDB etc.
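The gap-detection idea behind log_timestamp_large_intervals.sh can be sketched in a few lines of awk - a simplified illustration, not the script's actual implementation, assuming ISO-8601 "YYYY-MM-DDTHH:MM:SS" timestamps at line start and log lines within the same day:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print log lines preceded by a gap larger than $1 seconds, with the gap size
log_gaps() {
    local threshold="$1"
    # splitting on 'T' and ':' makes $2/$3/$4 the hour/min/sec of the timestamp
    awk -F'[T:]' -v t="$threshold" '
        {
            secs = $2 * 3600 + $3 * 60 + $4
            if (NR > 1 && secs - prev > t)
                print secs - prev " sec gap before: " $0
            prev = secs
        }'
}
```

For example, piping a CI log through log_gaps 30 surfaces only the steps that stalled for more than 30 seconds.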
mysql/, postgres/, sql/ and bin/ directories:
- sqlite.sh - one-touch SQLite, starts an sqlite3 shell with the sample 'chinook' database loaded
- mysql*.sh - MySQL scripts:
  - mysql.sh - shortens the mysql command by auto-populating switches from both standard environment variables such as $MYSQL_TCP_PORT, $DBI_USER, $USER (see doc) and other common environment variables like $MYSQL_HOST / $HOST, $MYSQL_USER, $MYSQL_PWD / $MYSQL_PASSWORD / $PASSWORD, $MYSQL_DATABASE / $DATABASE
  - mysql_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration, eg. select count(*) from {table}
  - mysql_*.sh - various scripts using mysql.sh for row counts, iterating each table, or outputting clean lists of databases and tables for quick scripting
  - mysqld.sh - one-touch MySQL, boots a Docker container + drops into a mysql shell, with the /sql scripts mounted in the container for easy sourcing, eg. source /sql/<name>.sql. Optionally loads the sample 'chinook' database
  - mariadb.sh - one-touch MariaDB, boots a Docker container + drops into a mysql shell, with the /sql scripts mounted in the container for easy sourcing, eg. source /sql/<name>.sql. Optionally loads the sample 'chinook' database
- postgres*.sh / psql.sh - PostgreSQL scripts:
  - postgres.sh - one-touch PostgreSQL, boots a Docker container + drops into a psql shell, with the /sql scripts mounted in the container for easy sourcing, eg. \i /sql/<name>.sql. Optionally loads the sample 'chinook' database
  - psql.sh - shortens the psql command by auto-populating switches from environment variables, using both the standard PostgreSQL environment variables like $PG* (see doc) and other common environment variables like $POSTGRESQL_HOST / $POSTGRES_HOST / $HOST, $POSTGRESQL_USER / $POSTGRES_USER / $USER, $POSTGRESQL_PASSWORD / $POSTGRES_PASSWORD / $PASSWORD, $POSTGRESQL_DATABASE / $POSTGRES_DATABASE / $DATABASE
  - postgres_foreach_table.sh - executes a SQL query against every table, replacing {db}, {schema} and {table} in each iteration, eg. select count(*) from {table}
  - postgres_*.sh - various scripts using psql.sh for row counts, iterating each table, or outputting clean lists of databases, schemas and tables for quick scripting
- checks/check_sqlfluff.sh - recursively iterates all SQL code files found in the given or current directory and runs the SQLFluff linter against them, inferring the different SQL dialects from each path/filename/extension

aws/ directory:
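The {db} / {table} templating used by mysql_foreach_table.sh and postgres_foreach_table.sh is plain shell parameter substitution. A minimal sketch of the pattern (run_query is a hypothetical stand-in for the real mysql.sh / psql.sh wrappers, and the table list is passed explicitly rather than discovered from the database):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run a query template once per table, substituting {db} and {table}
foreach_table() {
    local db="$1" query_template="$2"
    shift 2
    local table query
    for table in "$@"; do
        # bash pattern substitution: replace every {db} / {table} placeholder
        query="${query_template//\{db\}/$db}"
        query="${query//\{table\}/$table}"
        run_query "$query"
    done
}
```

Usage: foreach_table mydb 'select count(*) from {db}.{table}' users orders runs the counted query once per table.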
aws_*.sh:

- aws_profile.sh - switches to the AWS profile given as an arg, or prompts the user with a convenient interactive menu of AWS profiles to select from - useful when you work with many AWS profiles
- aws_cli_create_credential.sh - creates an AWS service account user for CI/CD or CLI use with Admin permissions (or another group or policy), creates an AWS access key, saves the credentials CSV, and even prints the shell export commands and credentials file config to configure your environment to start using it. A useful trick to avoid the daily CLI re-auth of aws sso login
- aws_terraform_create_credential.sh - creates an AWS Terraform service account with Admin permissions for Terraform Cloud or other CI/CD systems to run terraform plan and terraform apply, since none of these CI/CD systems work with the AWS SSO workflow. Saves the access keys as CSV and prints the shell export commands and credentials file config as above
- .envrc-aws - .envrc for AWS environments
- .envrc-kubernetes - sets up a kubectl context isolated to the current shell, to prevent race conditions between shells and scripts that would otherwise naively change the global ~/.kube/config context
- aws_sso_ssh.sh - pops up the local AWS SSO authentication browser window (if not already authenticated), then SCPs the latest ~/.aws/sso/cache/ file to a remote server and SSHes there, so you can easily use the AWS CLI or kubectl remotely without having to re-authenticate SSO on the remote host
- aws_terraform_create_s3_bucket.sh - creates a Terraform S3 bucket for backend state storage, blocks public access, enables versioning and encryption, and optionally locks out power-user roles, only permitting any given user/group/role ARNs, for security
- aws_terraform_create_dynamodb_table.sh - creates a Terraform locking table in DynamoDB for use with the S3 backend, plus a custom IAM policy that can be applied to less privileged accounts
- aws_terraform_create_all.sh - runs all of the above, additionally applying the custom DynamoDB IAM policy to the user, so that even a less privileged account can still acquire the Terraform lock (useful for GitHub Actions running with a read-only user to generate terraform plans on pull requests without requiring approval)
- aws_terraform_iam_grant_s3_dynamodb.sh - creates an IAM policy granting access to any S3 buckets and DynamoDB tables with terraform-state or tf-state in their names, and attaches it to the given user. Useful for limited-permission CI/CD accounts that run terraform plan on pull requests, eg. in GitHub Actions
- aws_account_summary.sh - prints an AWS account summary in key = value pairs for easy viewing / grepping of things like AccountMFAEnabled, AccountAccessKeysPresent. Useful for checking whether the root account has MFA enabled and no access keys, comparing the number of users vs the number of MFA devices etc. (see also check_aws_root_account.py in Advanced Nagios Plugins)
- aws_billing_alarm.sh - creates a CloudWatch billing alarm and SNS topic to email you when you're charged above a given threshold. This is often the first thing you want to do on an account
- aws_budget_alarm.sh - creates an AWS budget and SNS topic with an email subscription to alert you when you start incurring forecasted charges above 80% or actual usage above 90% of your budget. This is often the first thing you want to do on an account
- aws_batch_stale_jobs.sh - lists AWS Batch jobs in a given queue older than N hours
- aws_batch_kill_stale_jobs.sh - finds and kills AWS Batch jobs in a given queue older than N hours
- aws_cloudfront_distribution_for_origin.sh - returns the AWS CloudFront distribution ARN whose origin contains the given substring. Useful for quickly finding the CloudFront ARN needed in the permissions of a private S3 bucket exposed via CloudFront
- aws_cloudtrails_cloudwatch.sh - lists CloudTrails and their last delivery to CloudWatch Logs (which should be recent)
- aws_cloudtrails_event_selectors.sh - lists CloudTrails and their event selectors, to check each one has at least one event selector
- aws_cloudtrails_s3_accesslogging.sh - lists CloudTrail S3 buckets and their access logging prefixes and destination buckets, to check S3 access logging is enabled
- aws_cloudtrails_s3_kms.sh - lists CloudTrails and whether their S3 buckets are KMS-encrypted
- aws_cloudtrails_status.sh - lists CloudTrail statuses - whether logging, multi-region, and log file validation are enabled
- aws_config_all_types.sh - lists AWS Config recorders, whether they check all resource types (should be true) and include global resources (should be true)
- aws_config_recording.sh - lists AWS Config recorders, their recording status (should be true) and last status (should be SUCCESS)
- aws_csv_creds.sh - prints AWS credentials from a credentials CSV file as shell export statements. Useful to quickly switch your shell to some exported credentials from a service account for testing permissions, or to pipe to upload to a CI/CD system via an API (eg. jenkins_cred_add*.sh, github_actions_repo*_set_secret.sh, gitlab_*_set_env_vars.sh, circleci_*_set_env_vars.sh, bitbucket_*_set_env_vars.sh, terraform_cloud_*_set_vars.sh, kubectl_kv_to_secret.sh). Supports both the new user and new access key CSV file formats
- aws_codecommit_csv_creds.sh - prints AWS CodeCommit git credentials from a CSV file as shell export statements. Similar use cases and links to the above
- aws_ec2_instance_name_to_id.sh - looks up an EC2 instance ID from an instance name, with extra safety checks that only one instance ID is returned and a reverse lookup on that instance ID to re-validate it matches the name. If passed an instance ID, returns it as-is for convenience. Used by adjacent scripts
- aws_ec2_instances.sh - lists AWS EC2 instances with their DNS names and states in easy-to-read table output
- aws_ec2_terminate_instance_by_name.sh - terminates an AWS EC2 instance by name
- aws_ec2_create_ami_from_instance.sh - creates an AWS EC2 AMI from an EC2 instance and waits for it to become available for use
- aws_ec2_clone_instance.sh - clones an AWS EC2 instance by creating an AMI from the original and then launching a new instance from the AMI with the same settings as the original. Useful for testing risky things on a separate EC2 instance, eg. a Tableau restore on a server admin's instance
- aws_ec2_amis.sh - lists AWS EC2 AMIs belonging to your account in easy-to-read table output
- aws_ec2_ami_ids.sh - lists AWS EC2 AMI IDs, one per line, for use in adjacent scripts that create mapping tables and translate AMI IDs to names in the aws_info_ec2*.sh inventory scripts
- aws_ec2_ebs_*.sh - AWS EC2 EBS scripts:
  - aws_ec2_ebs_volumes.sh - lists EC2 instances and their EBS volumes
  - aws_ec2_ebs_volumes_unattached.sh - lists unattached EBS volumes in table format
- aws_ecr_*.sh - AWS ECR Docker image management scripts:
  - aws_ecr_docker_login.sh - authenticates Docker to AWS ECR, inferring the ECR registry from the current AWS account ID and region
  - aws_ecr_docker_build_push.sh - builds a Docker image and pushes it to ECR with not only the latest Docker tag but also the current Git hashref and Git tags
  - aws_ecr_list_repos.sh - lists ECR repositories, their Docker image mutability and whether image scanning is enabled
  - aws_ecr_list_tags.sh - lists all tags for a given ECR Docker image
  - aws_ecr_newest_image_tags.sh - lists the tags of a given ECR Docker image with the latest creation date
  - aws_ecr_alternate_tags.sh - lists the other tags for a given ECR docker image:tag (use arg <image>:latest to see which version / build hashref / date tag has been tagged latest)
  - aws_ecr_tag_image.sh - tags an ECR image with another tag without pulling and pushing it
  - aws_ecr_tag_image_by_digest.sh - same as above but tags an ECR image found by digest (more accurate, as a reference to an existing tag can be a moving target). Useful for restoring images that have been re-tagged
  - aws_ecr_tag_latest.sh - tags a given ECR docker image:tag as latest without pulling or pushing the docker image
  - aws_ecr_tag_branch.sh - tags a given ECR image:tag with the branch without pulling or pushing the docker image
  - aws_ecr_tag_datetime.sh - tags a given ECR docker image with its creation date and a UTC timestamp of when it was uploaded to ECR, without pulling or pushing the docker image
  - aws_ecr_tag_newest_image_as_latest.sh - finds and tags the newest build of a given ECR docker image as latest, without pulling or pushing the docker image
  - aws_ecr_tags_timestamps.sh - lists all the tags for a given ECR docker image along with their timestamps
  - aws_ecr_tags_old.sh - lists tags for a given ECR docker image older than N days
  - aws_ecr_delete_old_tags.sh - deletes tags older than N days for a given ECR docker image. Lists the image:tags to be deleted and prompts for confirmation, for safety
- aws_foreach_profile.sh - executes a templated command against all AWS named profiles configured in AWS CLIv2, replacing {profile} in each iteration. Combine with other scripts for powerful functionality, audits, setup etc., eg. aws_kube_creds.sh to configure kubectl for all environments
- aws_foreach_region.sh - executes a templated command against each AWS region enabled for the current account, replacing {region} in each iteration. Combine with the AWS CLI or scripts to find resources across regions
- aws_iam_*.sh - AWS IAM scripts:
  - aws_iam_password_policy.sh - prints the AWS password policy in key = value pairs for easy viewing / grepping (used before and after aws_iam_harden_password_policy.sh to show the differences)
  - aws_iam_harden_password_policy.sh - strengthens the AWS password policy according to CIS Foundations Benchmark recommendations
  - aws_iam_replace_access_key.sh - replaces your non-current IAM access key (whichever is inactive, unused, older / least recently used, or explicitly given), outputting the new key as shell export statements (usable with the same tooling listed for aws_csv_creds.sh)
  - aws_iam_policies_attached_to_users.sh - finds AWS IAM policies directly attached to users (anti-best-practice) instead of groups
  - aws_iam_policies_granting_full_access.sh - finds AWS IAM policies granting full access (anti-best-practice)
  - aws_iam_policies_unattached.sh - lists unattached AWS IAM policies
  - aws_iam_policy_attachments.sh - finds all users, groups and roles where a given IAM policy is attached, so that you can remove all these references in your Terraform code and avoid this error: Error: error deleting IAM policy arn:aws:iam::***:policy/mypolicy: DeleteConflict: Cannot delete a policy attached to entities.
  - aws_iam_policy_delete.sh - deletes an IAM policy, first handling all the prerequisite steps of detaching it from all users, groups and roles
  - aws_iam_generate_credentials_report_wait.sh - generates an AWS IAM credentials report
  - aws_iam_users.sh - lists your IAM users
  - aws_iam_users_access_key_age.sh - prints AWS users' access key statuses and ages (see also aws_users_access_key_age.py in DevOps Python Tools, which can filter by age and status)
  - aws_iam_users_access_key_age_report.sh - prints AWS users' access key statuses and ages using a bulk credentials report (faster for many users)
  - aws_iam_users_access_key_last_used.sh - prints the dates AWS users' access keys were last used
  - aws_iam_users_access_key_last_used_report.sh - same as above using a bulk credentials report (faster for many users)
  - aws_iam_users_last_used_report.sh - lists the dates AWS users' passwords / access keys were last used
  - aws_iam_users_mfa_active_report.sh - lists AWS users' password-enabled and MFA-enabled statuses
  - aws_iam_users_without_mfa.sh - lists AWS users with passwords enabled but no MFA
  - aws_iam_users_mfa_serials.sh - lists AWS users' MFA serial numbers (differentiates virtual vs hardware MFAs)
  - aws_iam_users_pw_last_used.sh - lists AWS users and the dates their passwords were last used
- aws_ip_ranges.sh - gets all AWS IP ranges for a given region and/or service using the IP ranges API
- aws_info*.sh:
  - aws_info_all_profiles.sh - calls aws_info.sh for all AWS profiles using aws_foreach_profile.sh
  - aws_info.sh - lists AWS deployed resources in the current or specified AWS account profile
  - aws_info_ec2.sh - lists AWS EC2 instances deployed in the current AWS account
  - aws_info_ec2_csv.sh - lists AWS EC2 instances in the current AWS account in CSV format
  - aws_info_ec2_all_profiles_csv.sh - lists AWS EC2 instances in CSV format across all configured AWS profiles, in their configured regions
- aws_eks_cloudwatch_logs.sh - enables and fetches AWS EKS master logs via CloudWatch
- aws_eks_ssh_dump_logs.sh - fetches system logs from EKS worker node EC2 VMs (eg. for vendor support debugging requests)
- aws_kms_key_rotation_enabled.sh - lists AWS KMS keys and whether key rotation is enabled
- aws_kube_creds.sh - auto-loads credentials for all AWS EKS clusters in the current --profile and --region, so your kubectl is ready to go
- aws_kubectl.sh - runs kubectl commands safely pinned to a given AWS EKS cluster using config isolation to avoid concurrency race conditions
- aws_logs_*.sh - some useful log queries over the last N hours (default 24):
  - aws_logs_batch_jobs.sh - lists AWS Batch job submission requests and their callers
  - aws_logs_ec2_spot.sh - lists AWS EC2 Spot Fleet creation requests, their callers and first tag values
  - aws_logs_ecs_tasks.sh - lists AWS ECS task run requests, their callers and job definitions
- aws_meta.sh - AWS EC2 metadata API query shortcut. See also the official ec2-metadata shell script with more features
- aws_nat_gateways_public_ips.sh - lists the public IPs of all NAT gateways. Useful to give to clients to allow through firewalls for webhooks or similar calls
- aws_rds_list.sh - lists RDS instances with select fields - name, status, engine, AZ, instance type, storage
- aws_rds_open_port_to_my_ip.sh - adds a security group to an RDS DB instance to open its native database SQL port to your public IP address
- aws_rds_get_version.sh - quickly retrieves the version of an RDS database, to know which JDBC jar version to download with install/download_*_jdbc.sh
- aws_route53_check_ns_records.sh - checks that AWS Route 53 public hosted zone NS servers are delegated in the public DNS hierarchy, and that no rogue NS servers are delegated that don't match the Route 53 zone configuration
- aws_sso_accounts.sh - lists all AWS SSO accounts the currently logged-in SSO user has access to
- aws_sso_configs.sh - generates AWS SSO configs for all AWS SSO accounts the currently logged-in SSO user has access to
- aws_sso_configs_save.sh - saves the AWS SSO configs generated by aws_sso_configs.sh to ~/.aws/config
- aws_sso_config_duplicate_sections.sh - lists duplicate AWS SSO config sections that use the same sso_account_id. Useful for deduplicating configs containing a mix of hand-crafted and aws_sso_configs.sh auto-generated sections
- aws_sso_config_duplicate_profile_names.sh - lists duplicate AWS SSO config profile names that use the same sso_account_id
- aws_sso_env_creds.sh - retrieves AWS SSO credentials in environment export command format, for copying to systems like Terraform Cloud
- aws_sso_role_arn.sh - prints the role ARN of the currently authenticated AWS SSO user in IAM-policy-usable format
- aws_sso_role_arns.sh - prints all AWS SSO role ARNs in IAM-policy-usable format
- aws_profile_config_add_if_missing.sh - reads AWS config profile sections from stdin and appends them to ~/.aws/config if the profile sections aren't already found
- aws_profile_generate_direnvs.sh - generates subdirectories containing config.ini and .envrc for each AWS profile in a given file, $AWS_CONFIG_FILE or ~/.aws/config. Take a large AWS config.ini generated by aws_sso_configs.sh and split it into subdirectories of direnvs
- aws_s3_bucket.sh - creates an S3 bucket, blocks public access, enables versioning and encryption, and optionally locks out any given user/group/role ARNs via a security policy (eg. to stop power users accessing a sensitive bucket like Terraform state)
- aws_s3_buckets_block_public_access.sh - blocks public access to one or more given S3 buckets, or a file containing bucket names, one per line
- aws_s3_account_block_public_access.sh - blocks S3 public access at the AWS account level
- aws_s3_check_buckets_public_blocked.sh - iterates each S3 bucket and checks it has public access fully blocked via policy. Parallelized for speed
- aws_s3_check_account_public_blocked.sh - checks that S3 public access is blocked at the AWS account level
- aws_s3_sync.sh - syncs multiple AWS S3 URLs from lists of files. Validates the S3 URLs, checks the source and destination list lengths match, and optionally that the path suffixes match, to prevent spraying data to the wrong destination paths one-by-one
- aws_s3_access_logging.sh - lists AWS S3 buckets and their access logging statuses
- aws_s3_delete_bucket_with_versions.sh - deletes an S3 bucket including all versions. Use with caution!
- aws_spot_when_terminated.sh - executes commands when the AWS EC2 instance running this script is notified of Spot termination, as a latch mechanism that can be set up any time after boot
- aws_sqs_check.sh - sends a test message to an AWS SQS queue, retrieves it to check it, then deletes it via its receipt handle ID
- aws_sqs_delete_message.sh - deletes 1-10 messages from a given AWS SQS queue (helps clear out test messages)
- aws_ssm_put_param.sh - reads a value from a command-line argument or non-echoing prompt and saves it to AWS Systems Manager Parameter Store. Useful for uploading passwords without exposing them
- aws_secret*.sh - AWS Secrets Manager scripts:
  - aws_secret_list.sh - returns the list of secrets, one per line
  - aws_secret_add.sh - reads a value from a command-line argument or non-echoing prompt and saves it to Secrets Manager. Useful for uploading passwords without exposing them
  - aws_secret_add_binary.sh - base64-encodes the contents of a given file and saves it to Secrets Manager as a binary secret. Useful for uploading things like QR code screenshots for sharing MFA to recovery admin accounts
  - aws_secret_update.sh - reads a value from a command-line argument or non-echoing prompt and updates a given Secrets Manager secret. Useful for updating passwords without exposing them
  - aws_secret_update_binary.sh - base64-encodes the contents of a given file and updates a given Secrets Manager secret. Useful for updating QR code screenshots for root accounts
  - aws_secret_get.sh - gets the secret value of a given secret from Secrets Manager
- eksctl_cluster.sh - downloads eksctl and creates an AWS EKS Kubernetes cluster

See also the AWS Knowledge Base notes.
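The CSV-to-exports idea behind aws_csv_creds.sh can be sketched as follows - a simplified illustration, not the script's actual implementation, assuming the classic "Access key ID,Secret access key" CSV layout downloaded from the AWS console (the real script also supports the newer per-user format):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Turn an AWS credentials CSV into shell export statements ready to eval
csv_creds() {
    local csv="$1"
    tail -n +2 "$csv" |                      # skip the CSV header line
    while IFS=, read -r key secret; do
        echo "export AWS_ACCESS_KEY_ID=$key"
        echo "export AWS_SECRET_ACCESS_KEY=$secret"
    done
}
```

Usage: eval "$(csv_creds new_credentials.csv)" switches the current shell to those credentials, or pipe the output to a CI/CD secret-setting script.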
gcp/ directory:
gcp_*.sh / gce_*.sh / gke_*.sh / gcr_*.sh / bigquery_*.sh:

- .envrc-gcp - .envrc to apply a GCP config to the local shell environment only, to avoid the race conditions caused by naively changing the global ~/.config/gcloud/active_config gcloud config
- .envrc-kubernetes - sets up a kubectl context isolated to the current shell, to prevent race conditions between shells and scripts that would otherwise naively change the global ~/.kube/config context
- gcp_terraform_create_credential.sh - creates a service account for Terraform with full permissions, creates and downloads a credentials key JSON, and even prints the export GOOGLE_CREDENTIALS command to configure your environment to start using Terraform immediately. Run once per project and combine with direnv to quickly and easily manage multiple GCP projects
- gcp_ansible_create_credential.sh - creates an Ansible service account with permissions on the current project, creates and downloads a credentials key JSON, and prints the environment variables to use it immediately
- gcp_cli_create_credential.sh - creates a gcloud SDK CLI service account with full Owner permissions to all projects, creates and downloads a credentials key JSON, and even prints the export GOOGLE_CREDENTIALS command to configure your environment to start using it. Avoids having to re-authenticate with gcloud auth login every day
- gcp_spinnaker_create_credential.sh - creates a Spinnaker service account with permissions on the current project, creates and downloads a credentials key JSON, and even prints the Halyard CLI configuration commands to use it
- gcp_info.sh - huge Google Cloud inventory of deployed resources in the current project - Cloud SDK info plus all of the following (detects which services are enabled to query):
  - gcp_info_compute.sh - GCE virtual machine instances, App Engine instances, Cloud Functions, GKE clusters, and all Kubernetes objects across all GKE clusters (see kubernetes_info.sh below for more details)
  - gcp_info_storage.sh - the Cloud SQL info below, plus: Cloud Storage buckets, Cloud Filestore, Cloud Memorystore Redis, BigTable clusters and instances, Datastore indexes
  - gcp_info_cloud_sql.sh - Cloud SQL instances, whether backups are enabled, and all databases on each instance
  - gcp_info_cloud_sql_databases.sh - lists the databases in each Cloud SQL instance. Included in gcp_info_cloud_sql.sh
  - gcp_info_cloud_sql_backups.sh - lists the backups for each Cloud SQL instance with their dates and statuses. Not included in gcp_info_cloud_sql.sh. See also gcp_sql_export.sh further down for more durable backups to GCS
  - gcp_info_cloud_sql_users.sh - lists the users for each running Cloud SQL instance. Not included in gcp_info_cloud_sql.sh, but useful for auditing users
  - gcp_info_networking.sh - VPC networks, addresses, proxies, subnets, routers, routes, VPN gateways, VPN tunnels, reservations, firewall rules, forwarding rules, Cloud DNS managed zones and verified domains
  - gcp_info_bigdata.sh - Dataproc clusters and jobs in all regions
  - gcp_info_tools.sh - Cloud Source Repositories, Cloud Builds, Container Registry images across all major registries (gcr.io, us.gcr.io, eu.gcr.io, asia.gcr.io), Deployment Manager deployments
  - gcp_info_auth_config.sh - authentication configurations, organizations and current config
  - gcp_info_projects.sh - project names and IDs
  - gcp_info_services.sh - enabled services and APIs
  - gcp_service_apis.sh - lists all available GCP services and APIs with their statuses (enabled/disabled), and provides an is_service_enabled() function used throughout adjacent scripts to avoid errors and only show relevant enabled services
  - gcp_info_accounts_secrets.sh - IAM service accounts, Secret Manager secrets
  - gcp_info_all_projects.sh - same as the above but for all detected projects
- gcp_foreach_project.sh - executes a templated command against all GCP projects, replacing {project_id} and {project_name} in each iteration (used by gcp_info_all_projects.sh to call gcp_info.sh)
- gcp_find_orphaned_disks.sh - lists orphaned disks (not attached to any compute instance) in one or more GCP projects
- gcp_secret*.sh - Google Secret Manager scripts:
  - gcp_secret_add.sh - reads a value from a command-line argument or non-echoing prompt and saves it to GCP Secret Manager. Useful for uploading passwords without exposing them
  - gcp_secret_add_binary.sh - uploads a binary file to GCP Secret Manager by base64-encoding it first. Useful for uploading things like QR code screenshots for sharing MFA to recovery admin accounts
  - gcp_secret_update.sh - reads a value from a command-line argument or non-echoing prompt and updates a given GCP Secret Manager secret. Useful for updating passwords without exposing them
  - gcp_secret_get.sh - finds the latest version of a given GCP Secret Manager secret and returns its value. Used by adjacent scripts
  - gcp_secret_label_k8s.sh - labels a given existing GCP secret with the current kubectl cluster name and namespace for later use by gcp_secrets_to_kubernetes.sh
  - gcp_secrets_to_kubernetes.sh - loads GCP secrets to Kubernetes secrets in a 1-to-1 mapping. Can specify a list of secrets, or auto-load secrets with labels kubernetes-cluster and kubernetes-namespace matching the current kubectl context (kcd to the right namespace first, see .bash.d/kubernetes). See also kubernetes_get_secret_values.sh to debug the actual values loaded, and Sealed Secrets / External Secrets in my Kubernetes repo
  - gcp_secrets_to_kubernetes_multipart.sh - creates a single Kubernetes secret from multiple GCP secrets (eg. to put private.pem and public.pem into the same secret, to appear as files in mounted secret volumes for pods to use). See also Sealed Secrets / External Secrets in my Kubernetes repo
  - gcp_secrets_labels.sh - lists GCP secrets and their labels, one per line, suitable for quick viewing or shell pipelines
  - gcp_secrets_update_label.sh - updates a given label key=value with a new label value on all GCP secrets in the current project
  - gcp_service_account_credential_to_secret.sh - creates a GCP service account and exports its credentials key to GCP Secret Manager (can be used standalone or with gcp_secrets_to_kubernetes.sh)
- gke_*.sh - Google Kubernetes Engine scripts:
  - gke_kube_creds.sh - auto-loads all GKE cluster credentials in the current / given / all projects
  - gke_kubectl.sh - runs kubectl commands safely pinned to a given GKE cluster using config isolation to avoid concurrency race conditions
  - gke_firewall_rule_cert_manager.sh - creates a GCP firewall rule for a given GKE cluster's masters to access the Cert Manager admission webhook (auto-determines the master CIDR, network and target tags)
  - gke_firewall_rule_kubeseal.sh - creates a GCP firewall rule for a given GKE cluster's masters to access the Sealed Secrets controller for kubeseal (auto-determined as above)
  - gke_nodepool_nodes.sh - lists all nodes in a given nodepool on the current GKE cluster via kubectl labels (fast)
  - gke_nodepool_nodes2.sh - same as above via the gcloud SDK (slow, iterates instance groups)
  - gke_nodepool_taint.sh - taints / untaints all nodes in a given GKE nodepool on the current cluster (see kubectl_node_taints.sh for a fast way to view taints)
  - gke_nodepool_drain.sh - drains all nodes in a given nodepool (eg. to decommission or rebuild the nodepool, perhaps with different taints)
  - gke_persistent_volumes_disk_mappings.sh - lists GKE Kubernetes persistent volume to GCP persistent disk name mappings, along with PVCs and namespaces. Useful when investigating or resizing PVs etc.
- gcr_*.sh - Google Container Registry scripts:
  - gcr_list_tags.sh - lists all tags for a given GCR Docker image
  - gcr_newest_image_tags.sh - lists the tags of a given GCR Docker image with the latest creation date (can be used to determine which image version to tag latest)
  - gcr_alternate_tags.sh - lists the other tags for a given GCR docker image:tag (use arg <image>:latest to see which version / build hashref / date tag has been tagged latest)
  - gcr_tag_latest.sh - tags a given GCR docker image:tag as latest without pulling or pushing the docker image
  - gcr_tag_branch.sh - tags a given GCR docker image:tag with the branch without pulling or pushing the docker image
  - gcr_tag_datetime.sh - tags a given GCR docker image with its creation date and a UTC timestamp (of when it was uploaded or created by Google Cloud Build) without pulling or pushing the docker image
  - gcr_tag_newest_image_as_latest.sh - finds and tags the newest build of a given GCR docker image as latest, without pulling or pushing the docker image
  - gcr_tags_timestamps.sh - lists all the tags for a given GCR docker image along with their timestamps
  - gcr_tags_old.sh - lists tags for a given GCR docker image older than N days
  - gcr_delete_old_tags.sh - deletes tags older than N days for a given GCR docker image. Lists the image:tags to be deleted and prompts for confirmation, for safety
- gcp_ci_build.sh - script template for CI/CD to trigger Google Cloud Build to build a Docker container image with additional datetime and latest tags
- gcp_ci_deploy_k8s.sh - script template for CI/CD to deploy to GKE Kubernetes using Kustomize
- gce_*.sh - Google Compute Engine scripts:
  - gce_foreach_vm.sh - runs a command for each GCP VM instance matching a given name/IP regex in the current GCP project
  - gce_host_ips.sh - prints the IPs and hostnames of all or given GCE VMs, suitable for /etc/hosts
  - gce_ssh.sh - runs gcloud compute ssh to a VM while auto-determining its zone, to override any inherited zone config and make scripted iteration over VMs easier
  - gcs_ssh_keyscan.sh - SSH-keyscans GCE VMs into ~/.ssh/known_hosts using gce_host_ips.sh
  - gce_meta.sh - simple script to query the GCE metadata API from inside VMs
  - gce_when_preempted.sh - GCE VM preemption latch script - can be set at any time after boot to execute one or more commands upon preemption
  - gce_is_preempted.sh - returns true/false if the GCE VM has been preempted, callable from other scripts
  - gce_instance_service_accounts.sh - lists GCE VM instance names and their service accounts
- gcp_firewall_disable_default_rules.sh - disables the default GCP firewall rules
- gcp_firewall_risky_rules.sh - lists enabled risky GCP firewall rules that allow traffic from 0.0.0.0/0
- gcp_sql_*.sh - Cloud SQL scripts:
  - gcp_sql_backup.sh - creates Cloud SQL backups
  - gcp_sql_export.sh - creates Cloud SQL exports to GCS
  - gcp_sql_enable_automated_backups.sh - enables automated daily Cloud SQL backups
  - gcp_sql_enable_point_in_time_recovery.sh - enables point-in-time recovery using write-ahead logs
  - gcp_sql_proxy.sh - boots a Cloud SQL proxy to all Cloud SQL instances for fast, convenient, direct psql / mysql access via local sockets. Installs Cloud SQL Proxy if necessary
  - gcp_sql_running_primaries.sh - lists the running primary Cloud SQL instances
  - gcp_sql_service_accounts.sh - lists Cloud SQL instance service accounts. Useful for copying into IAM permission grants (eg. Storage Object Creator to GCS for SQL export backups)
  - gcp_sql_create_readonly_service_account.sh - creates a service account with read-only permissions to Cloud SQL, eg. to run export backups to GCS
  - gcp_sql_grant_instances_gcs_object_creator.sh - grants minimal GCS objectCreator permission on a bucket to the primary Cloud SQL instances for exports
  - gcp_cloud_schedule_sql_exports.sh - creates Google Cloud Scheduler jobs that trigger a Cloud Function via PubSub to run Cloud SQL exports to GCS for all Cloud SQL instances in the current GCP project
- bigquery_*.sh - BigQuery scripts:
  - bigquery_list_datasets.sh - lists BigQuery datasets in the current GCP project
  - bigquery_list_tables.sh - lists BigQuery tables in a given dataset
  - bigquery_list_tables_all_datasets.sh - lists tables for all datasets in the current GCP project
  - bigquery_foreach_dataset.sh - executes a templated command for each dataset
  - bigquery_foreach_table.sh - executes a templated command for each table in a given dataset
  - bigquery_foreach_table_all_datasets.sh - executes a templated command for each table in each dataset in the current GCP project
  - bigquery_table_row_count.sh - gets the row count for a given table
  - bigquery_tables_row_counts.sh - gets the row counts for all tables in a given dataset
  - bigquery_tables_row_counts_all_datasets.sh - gets the row counts for all tables in all datasets in the current GCP project
  - bigquery_generate_query_biggest_tables_across_datasets_by_row_count.sh - generates a BigQuery SQL query to find the top 10 biggest tables across datasets by row count
  - bigquery_generate_query_biggest_tables_across_datasets_by_size.sh - generates a BigQuery SQL query to find the top 10 biggest tables across datasets by size
- gcp_service_account*.sh:
  - gcp_service_account_credential_to_secret.sh - creates a GCP service account and exports its credentials key to GCP Secret Manager (can be used standalone or with gcp_secrets_to_kubernetes.sh)
  - gcp_service_accounts_credential_keys.sh - lists all service account credential keys and their expiry dates; grep for 9999-12-31T23:59:59Z to find non-expiring keys
  - gcp_service_accounts_credential_keys_age.sh - lists all service account credential key ages
  - gcp_service_accounts_credential_keys_expired.sh - lists expired service account credential keys, which should be deleted and recreated if needed
  - gcp_service_account_members.sh - lists all members and roles with permissions to use any service account. Useful for finding GKE Workload Identity mappings
- gcp_iam_*.sh:
  - gcp_iam_roles_in_use.sh - lists GCP IAM roles in use in the current or all projects
  - gcp_iam_identities_in_use.sh - lists identities in use in the current or all projects
  - gcp_iam_roles_granted_to_identity.sh - lists GCP IAM roles granted to identities matching a regex (users/groups/serviceAccounts) in the current or all projects
  - gcp_iam_roles_granted_too_widely.sh - lists GCP IAM roles granted to allAuthenticatedUsers or, even worse, allUsers (unauthenticated) in one or all projects
  - gcp_iam_roles_with_direct_user_grants.sh - lists GCP IAM roles granted directly to users, violating group-based management best practice
  - gcp_iam_serviceaccount_members.sh - lists members with permissions to use each GCP service account
  - gcp_iam_serviceaccounts_without_permissions.sh - finds service accounts with no IAM permissions; can be used to detect obsolete service accounts after a cleanup of permissions unused for 90 days
  - gcp_iam_workload_identities.sh - lists GKE Workload Identity integrations, using gcp_iam_serviceaccount_members.sh
  - gcp_iam_users_granted_directly.sh - lists GCP IAM users granted roles directly, violating group-based management best practice
- gcs_bucket_project.sh - finds the GCP project a given bucket belongs to using the GCP Storage API
- gcs_curl_file.sh - retrieves the contents of a GCS file from a given bucket and path using the GCP Storage API. Useful for starting shell pipelines or calling from other scripts

See also the GCP Knowledge Base notes.
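The base64 round-trip behind gcp_secret_add_binary.sh / aws_secret_add_binary.sh can be sketched as follows - a simplified illustration, not the scripts' actual implementation, omitting the gcloud / aws CLI calls that store and fetch the encoded payload, and assuming GNU base64 (base64 -d):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Binary content (eg. a QR code screenshot) must be base64-encoded before
# being stored as a text secret value, and decoded again on retrieval
encode_secret() {
    base64 < "$1" | tr -d '\n'    # single-line payload for the secret value
}

decode_secret() {
    base64 -d > "$1"              # restore the original binary file
}
```

Piping encode_secret's output into decode_secret reproduces the original file byte-for-byte.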
kubernetes/ directory:
.envrc-kubernetesaws/eksctl_cluster.sh使用eksctl迅速插入AWS EKS群集,並具有一些明智的默認值kubernetes_info.sh巨大的kubernetes清單清單清單在當前群集 / kube上下文中所有名稱空間部署的資源列表:kubectl.sh運行kubectl命令使用config隔離安全地固定在給定上下文中,以避免並發競賽條件kubectl_diff_apply.sh生成一個kubectl diff並提示應用kustomize_diff_apply.sh運行kustomize build,對任何名稱空間進行了重新處理,顯示了提出的更改的kubectl差異,並提示應用提示kustomize_diff_branch.sh在當前或所有給定目錄的當前和目標基礎分支上運行kustomize構建,然後顯示每個目錄的差異。在重構時可檢測差異,例如切換到標記的基地kubectl_create_namespaces.sh在yaml文件或stdin中創建任何名稱空間,這是空白安裝上差異的先決條件,由相鄰腳本用於安全性kubernetes_check_objects_namespaced.sh檢查kubernetes yaml(s)是否明確命名的對象,這很容易導致對錯誤的名稱空間的部署。從您當前的kubernetes群集中讀取API資源,如果成功,則不包括整個群集的對象kustomize_check_objects_namespaced.sh檢查kustomize build build yaml輸出未明確命名的對象(在腳本上方使用)kubectl_deployment_pods.sh通過查詢部署的選擇標籤,然後查詢與這些標籤匹配的POD,然後查詢pods,獲取帶有無法預測的後綴的POD名稱kubectl_get_all.sh查找所有名稱的kubernetes對象,並請求當前或給定的名稱空間。有用,因為kubectl get all錯過了對像類型的LOFkubectl_get_annotation.sh找到具有給定註釋的對像類型kubectl_restart.sh在當前或給定的名稱空間中重新啟動全部或過濾的部署/狀態操作。在調試或清除應用程序問題時有用kubectl_logs.sh尾巴在當前或給定的名稱空間中的所有POD或過濾吊艙中的所有容器。在實時測試中調試一組豆莢時有用kubectl_kv_to_secret.sh如args或通過stdin創建一個kuberbetes秘密,從key=value = value或shell導出格式(例如,從aws_csv_creds.sh中管道中的管道)kubectl_secret_values.sh在給定的kubernetes秘密中打印鍵和base64解碼值,以快速調試kubernetes秘密。另請參閱: gcp_secrets_to_kubernetes.shkubectl_secrets_download.sh在當前以當前或給定名稱空間下載到同名本地文件中,在遷移到密封的秘密之前可用作備份kubernetes_secrets_compare_gcp_secret_manager.sh將每個kubernetes秘密與GCP Secret Manager中的相應秘密進行比較。對於安全檢查GCP秘密管理器值在使外部秘密替換之前對齊kubernetes_secret_to_external_secret.sh從現有的kubernetes秘密生成外部秘密kubernetes_secrets_to_external_secrets.sh - generates External Secrets from all existing Kubernetes secrets found in the current or given namespacekubernetes_secret_to_sealed_secret.sh - generates a Bitnami Sealed Secret from an existing Kubernetes secretkubernetes_secrets_to_sealed_secrets.sh - generates Bitnami Sealed Secrets from all existing Kubernetes secrets found in the current or given namespacekubectl_secrets_annotate_to_be_sealed.sh - annotates secrets in current or given namespace to allow being 
overwritten by Sealed Secrets (useful to sync ArgoCD health)kubectl_secrets_not_sealed.sh - finds secrets with no SealedSecret ownerReferenceskubectl_secrets_to_be_sealed.sh - finds secrets pending overwrite by Sealed Secrets with the managed annotationkubernetes_foreach_context.sh - executes a command across all kubectl contexts, replacing {context} in each iteration (skips lab contexts docker / minikube / minishift to avoid hangs since they're often offline)kubernetes_foreach_namespace.sh - executes a command across all kubernetes namespaces in the current cluster context, replacing {namespace} in each iterationkubernetes_foreach_context.sh and useful when combined with gcp_secrets_to_kubernetes.sh to load all secrets from GCP to Kubernetes for the current cluster, or combined with gke_kube_creds.sh and kubernetes_foreach_context.sh for all clusters!kubernetes_api.sh - finds Kubernetes API and runs your curl arguments against it, auto-getting authorization token and auto-populating OAuth authentication headerkubernetes_autoscaler_release.sh - finds the latest Kubernetes Autoscaler release that matches your local Kubernetes cluster version using kubectl and the GitHub API. 
Useful for quickly finding the image override version for eks-cluster-autoscaler-kustomization.yaml in the Kubernetes configs repo

- kubernetes_etcd_backup.sh - creates a timestamped backup of the Kubernetes Etcd database for a kubeadm cluster
- kubernetes_delete_stuck_namespace.sh - forcibly deletes those pesky Kubernetes namespaces of 3rd party apps like Knative that get stuck and hang indefinitely on the finalizers during deletion
- kubeadm_join_cmd.sh - outputs the kubeadm join command (generates a new token) to join an existing Kubernetes cluster (used in Vagrant Kubernetes provisioning scripts)
- kubeadm_join_cmd2.sh - outputs the kubeadm join command manually (calculates the cert hash + generates a new token) to join an existing Kubernetes cluster
- kubernetes_nodes_ssh_dump_logs.sh - fetches logs from Kubernetes nodes (eg. for support debug requests by vendors)
- kubectl_exec.sh - finds and execs into the first Kubernetes pod matching the given name regex, optionally specifying the container name regex to exec into, and shows the full generated kubectl exec command line for clarity
- kubectl_exec2.sh - finds and execs into the first Kubernetes pod matching the given pod filters, optionally specifying the container to exec into, and shows the full generated kubectl exec command line for clarity
- kubectl_pods_per_node.sh - lists the number of pods per node, sorted descending
- kubectl_pods_important.sh - lists important pods and their nodes to check on scheduling
- kubectl_pods_colocated.sh - lists pods from deployments/statefulsets that are colocated on the same node
- kubectl_node_labels.sh - lists nodes and their labels, one per line, easier to read visually or pipe in scripting
- kubectl_pods_running_with_labels.sh - lists running pods with labels matching key=value pair arguments
- kubectl_node_taints.sh - lists nodes and their taints
- kubectl_jobs_stuck.sh - finds Kubernetes jobs stuck for hours or days with no completions
- kubectl_jobs_delete_stuck.sh - prompts for confirmation to delete stuck Kubernetes jobs found by the script above
- kubectl_images.sh - lists Kubernetes container images running on the current cluster
- kubectl_image_counts.sh - lists Kubernetes container image running counts, sorted descending
- kubectl_image_deployments.sh - lists which deployments, statefulsets or daemonsets container images belong to. Useful to find which deployment, statefulset or daemonset to upgrade to replace a container image, eg. when replacing the deprecated k8s.gcr.io registry with registry.k8s.io
- kubectl_pod_count.sh - lists the total running Kubernetes pod count
- kubectl_pod_labels.sh - lists Kubernetes pods and their labels, one label per line for easier shell script piping for further actions
- kubectl_pod_ips.sh - lists Kubernetes pods and their pod IP addresses
- kubectl_container_count.sh - lists the total running Kubernetes container count
- kubectl_container_counts.sh - lists Kubernetes container running counts by name, sorted descending
- kubectl_pods_dump_*.sh - dump stats / logs / jstacks from all pods matching a given regex and namespace to txt files for support debugging:
  - kubectl_pods_dump_stats.sh - dump stats
  - kubectl_pods_dump_logs.sh - dump logs
  - kubectl_pods_dump_jstacks.sh - dump Java jstacks
  - kubectl_pods_dump_all.sh - calls the above kubectl_pods_dump_*.sh scripts for N iterations with a given interval
- kubectl_empty_namespaces.sh - finds namespaces without any of the usual objects using kubectl get all
- kubectl_delete_empty_namespaces.sh - removes empty namespaces, uses kubectl_empty_namespaces.sh
- kubectl_<image>.sh - quick launch one-off pods for interactive debugging in Kubernetes:
  - kubectl_alpine.sh
  - kubectl_busybox.sh
  - kubectl_curl.sh
  - kubectl_dnsutils.sh
  - kubectl_gcloud_sdk.sh
- kubectl_run_sa.sh - launches a quick pod with the given service account to test private repo pulls & other permissions
- kubectl_port_forward.sh - launches kubectl port-forward to a given pod's port with an optional label or name filter. If more than one pod is found, prompts with an interactive dialogue to choose one. Optionally automatically opens the forwarded localhost URL in the default browser
- kubectl_port_forward_spark.sh - does the above for the Spark UI
- helm_template.sh - templates a Helm chart for Kustomize deployments
- kustomize_parse_helm_charts.sh - parses the Helm charts from one or more kustomization.yaml files into TSV format for further shell pipe processing
- kustomize_install_helm_charts.sh - installs the Helm charts from one or more kustomization.yaml files the old-fashioned Helm CLI way so that tools like Nova can be used to detect outdated charts (used in the Kubernetes-configs repo's CI)
- kustomize_update_helm_chart_versions.sh - updates one or more kustomization.yaml files to the latest versions of any charts they contain
- kustomize_materialize.sh - recursively materializes all kustomization.yaml to kustomization.materialized.yaml in the same directories for scanning with tools like Pluto to detect deprecated API objects inherited from embedded Helm charts. Parallelized for performance
- argocd_auto_sync.sh - toggles Auto-Sync on/off for a given app to allow repairs and maintenance operations, and also disables / re-enables the App-of-Apps base apps to stop them re-enabling the app
- argocd_apps_sync.sh - syncs all ArgoCD apps matching an optional ERE regex filter on their names using the ArgoCD CLI
- argocd_apps_wait_sync.sh - syncs all ArgoCD apps matching an optional ERE regex filter on their names using the ArgoCD CLI, while also checking their health and operation
- argocd_generate_resource_whitelist.sh - generates a YAML cluster and namespace resource whitelist for ArgoCD project config. If given an existing YAML, will merge in its original whitelists, dedupe, and write them back into the file using an in-place edit. Useful because ArgoCD 2.2+ doesn't show resources that aren't explicitly allowed, such as ReplicaSets and Pods
- pluto_detect_helm_materialize.sh - recursively materializes all Helm Chart.yaml and runs Pluto on each directory to work around this issue
- pluto_detect_kustomize_materialize.sh - recursively materializes all kustomization.yaml and runs Pluto on each directory to work around this issue
- pluto_detect_kubectl_dump_objects.sh - dumps all live Kubernetes objects to /tmp so Pluto can be run to detect deprecated API objects on the cluster from any source
- rancher_api.sh - queries the Rancher API with authentication
- rancher_kube_creds.sh - downloads all Rancher clusters' credentials into subdirectories matching the cluster names, with an .envrc in each, so a quick cd into one and your kubectl is ready to rock

See also Knowledge Base notes for Kubernetes.
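The pods-per-node counting idea can be sketched with a classic shell pipeline. This is a minimal illustration, not the actual kubectl_pods_per_node.sh: it counts node names read from stdin, where a real run would feed it from a kubectl jsonpath query as hinted in the comment.

```shell
# Minimal sketch of the counting pipeline behind kubectl_pods_per_node.sh.
# Input: one node name per line on stdin. In real use the input would come
# from something like:
#   kubectl get pods -A -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}'
pods_per_node() {
  # count occurrences of each node name, then sort by count descending
  sort | uniq -c | sort -rn
}

printf 'node-a\nnode-b\nnode-a\nnode-a\n' | pods_per_node
```

The same sort | uniq -c | sort -rn pattern underpins several of the "counts sorted descending" scripts above.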
docker/ directory:
docker_*.sh / dockerhub_*.sh - Docker / DockerHub API scripts:

- dockerhub_api.sh - queries DockerHub API v2 with or without authentication ( $DOCKERHUB_USER & $DOCKERHUB_PASSWORD / $DOCKERHUB_TOKEN )
- docker_api.sh - queries a Docker Registry with optional basic authentication if $DOCKER_USER & $DOCKER_PASSWORD are set
- docker_build_hashref.sh - runs docker build and auto-generates the docker image name and tag from the relative Git path and commit short SHA hashref, plus a dirty SHA suffix if the git contents are modified. Useful to compare docker image sizes between your clean and modified versions of a Dockerfile or contents
- docker_package_check.sh - runs package installs on major versions of a docker image to check given packages are available before adding them and breaking builds across Linux distro versions
- docker_registry_list_images.sh - lists images in a given private Docker Registry
- docker_registry_list_tags.sh - lists tags for a given image in a private Docker Registry
- docker_registry_get_image_manifest.sh - gets a given image:tag manifest from a private Docker Registry
- docker_registry_tag_image.sh - tags a given image with a new tag in a private Docker Registry via the API without pulling and pushing the image data (much faster and more efficient)
- dockerhub_list_tags.sh - lists tags for a given DockerHub repo. See also dockerhub_show_tags.py in the DevOps Python tools repo
- dockerhub_list_tags_by_last_updated.sh - lists tags for a given DockerHub repo sorted by last updated timestamp descending
- dockerhub_search.sh - searches with a configurable number of returned items (the older docker CLI was limited to 25 results)
- clean_caches.sh - cleans out OS package and programming language caches; call near the end of a Dockerfile to reduce Docker image size
- quay_api.sh - queries the Quay.io API with the OAuth2 authentication token $QUAY_TOKEN

See also Knowledge Base notes for Docker.
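For context on the registry scripts above, the Docker Registry v2 API exposes tag listings at a well-known path. The sketch below only shows the URL construction (the registry hostname and image are placeholders); the real docker_registry_list_tags.sh additionally handles authentication.

```shell
# Illustrative only: build the Docker Registry v2 tags-list endpoint URL
# per the Registry HTTP API v2 spec (/v2/<name>/tags/list).
# The hostname and image name below are example placeholders.
registry_tags_url() {
  local registry="$1" image="$2"
  echo "https://${registry}/v2/${image}/tags/list"
}

# then eg.:
#   curl -s "$(registry_tags_url registry.example.com library/nginx)" | jq -r '.tags[]'
registry_tags_url registry.example.com library/nginx
```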
data/ directory:
- avro_tools.sh - runs the Avro Tools jar, downloading it if not already present (determines the latest version when downloading)
- parquet_tools.sh - runs the Parquet Tools jar, downloading it if not already present (determines the latest version when downloading)
- csv_header_indices.sh - lists CSV headers with their zero-indexed numbers, a useful reference when coding against column positions
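The header-indexing idea is a one-pipeline job. This is a minimal sketch of the concept (reading the CSV from stdin here), not the actual csv_header_indices.sh:

```shell
# Print each header field of a CSV with its zero-based column index,
# tab-separated. Reads the CSV from stdin; only the first line is used.
# Note: this naive split does not handle quoted fields containing commas.
csv_header_indices() {
  head -n1 | tr ',' '\n' | awk '{print NR-1"\t"$0}'
}

printf 'name,age,city\n1,2,3\n' | csv_header_indices
```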
- ini_config_add_if_missing.sh - reads INI config blocks from stdin and appends them to the specified file if the section is not found. Used by aws_profile_config_add_if_missing.sh
- ini_config_duplicate_sections.sh - lists duplicate INI config sections that are using the same value for a given key in the given .ini file
- ini_config_duplicate_section_names.sh - lists duplicate INI config section names that are using the same value for a given key in the given .ini file
- ini_grep_section.sh - prints the named section from a given .ini file to stdout
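Extracting one INI section amounts to printing from its header up to the next section header. A rough awk sketch of this idea (reading from stdin for illustration; the real ini_grep_section.sh takes a file argument and may differ):

```shell
# Print the lines of one named INI section: from its [section] header
# line up to (not including) the next [other] section header.
ini_grep_section() {
  local section="$1"
  awk -v s="[$section]" '
    $0 == s            { found=1 }   # our section header: start printing
    /^\[/ && $0 != s   { found=0 }   # any other header: stop printing
    found
  '
}

printf '[one]\na=1\n[two]\nb=2\n' | ini_grep_section two
```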
- wordcount.sh - counts and ranks words by their frequency in file(s) or stdin
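Word-frequency ranking is the classic Unix pipeline. A condensed sketch of the idea (the real wordcount.sh may normalize differently):

```shell
# Split input into one word per line, lowercase, drop blanks,
# then count and rank by frequency descending.
wordcount() {
  tr -s '[:space:]' '\n' | tr '[:upper:]' '[:lower:]' | grep -v '^$' | sort | uniq -c | sort -rn
}

echo 'The cat saw the hat' | wordcount
```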
- Data format validation is covered by the validate_*.py scripts in the DevOps Python Tools repo
- json2yaml.sh - converts JSON to YAML
- yaml2json.sh - converts YAML to JSON - needed for some APIs like GitLab CI linting (see the GitLab section above)
bigdata/ and kafka/ directories:
- kafka_*.sh - scripts to make Kafka CLI usage easier, including auto-setting Kerberos to source the TGT from the environment and auto-populating broker and zookeeper addresses. These are auto-added to the $PATH when .bashrc is sourced. For something similar for Solr, see solr_cli.pl in the DevOps Perl Tools repo
- zookeeper*.sh - Apache ZooKeeper scripts:
  - zookeeper_client.sh - shortens the zookeeper-client command by auto-populating the zookeeper quorum from the environment variable $ZOOKEEPERS, or else parsing the zookeeper quorum from /etc/**/*-site.xml, to make it faster and easier to connect
  - zookeeper_shell.sh - shortens Kafka's zookeeper-shell command by auto-populating the zookeeper quorum from the environment variable $KAFKA_ZOOKEEPERS and optionally $KAFKA_ZOOKEEPER_ROOT to make it faster and easier to connect
- hive_*.sh / beeline*.sh - Apache Hive scripts:
  - beeline.sh - shortens the beeline command to connect to HiveServer2 by auto-populating Kerberos and SSL settings, zookeepers for HiveServer2 HA discovery if the environment variable $HIVE_HA is set, or using the $HIVESERVER_HOST environment variable so you can connect with no arguments (prompts for the HiveServer2 address if you haven't set $HIVESERVER_HOST or $HIVE_HA )
  - beeline_zk.sh - same as above for HiveServer2 HA, by auto-populating SSL and ZooKeeper service discovery settings (specify the $HIVE_ZOOKEEPERS environment variable to override). Automatically called by beeline.sh if either $HIVE_ZOOKEEPERS or $HIVE_HA is set (the latter parses hive-site.xml for the ZooKeeper addresses)
  - hive_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration, eg. select count(*) from {table}
  - hive_list_databases.sh - lists Hive databases, one per line, suitable for scripting pipelines
  - hive_list_tables.sh - lists Hive tables, one per line, suitable for scripting pipelines
  - hive_tables_metadata.sh - lists a given DDL metadata field for each Hive table (to compare tables)
  - hive_tables_location.sh - lists the data location per Hive table (eg. to compare external table locations)
  - hive_tables_row_counts.sh - lists the row count per Hive table
  - hive_tables_column_counts.sh - lists the column count per Hive table
- impala*.sh - Apache Impala scripts:
  - impala_shell.sh - shortens the impala-shell command to connect to Impala by parsing the Hadoop topology map and selecting a random datanode to connect to its Impalad, acting as a cheap CLI load balancer. For a real load balancer see the HAProxy config for Impala (and many other Big Data & NoSQL technologies). Optional environment variables $IMPALA_HOST (eg. point to an explicit node or an HAProxy load balancer) and IMPALA_SSL=1 (or use the regular impala-shell --ssl argument pass-through)
  - impala_foreach_table.sh - executes a SQL query against every table, replacing {db} and {table} in each iteration, eg. select count(*) from {table}
  - impala_list_databases.sh - lists Impala databases, one per line, suitable for scripting pipelines
  - impala_list_tables.sh - lists Impala tables, one per line, suitable for scripting pipelines
  - impala_tables_metadata.sh - lists a given DDL metadata field for each Impala table (to compare tables)
  - impala_tables_location.sh - lists the data location per Impala table (eg. to compare external table locations)
  - impala_tables_row_counts.sh - lists the row count per Impala table
  - impala_tables_column_counts.sh - lists the column count per Impala table
- hdfs_*.sh - Hadoop HDFS scripts:
  - hdfs_checksum*.sh - walks an HDFS directory tree and outputs HDFS native checksums (faster) or portable externally comparable CRC32, in serial or in parallel to save time
  - hdfs_find_replication_factor_1.sh / hdfs_set_replication_factor_3.sh - finds HDFS files with replication factor 1 / sets HDFS files with replication factor <=2 to replication factor 3, to repair replication safety and avoid no-replica alarms during maintenance operations (see also the Python API version in the DevOps Python Tools repo)
  - hdfs_file_size.sh / hdfs_file_size_including_replicas.sh - quickly differentiate HDFS files' raw size vs total replicated size
- hadoop_random_node.sh - picks a random Hadoop cluster worker node, like a cheap CLI load balancer; useful in scripts when you want to connect to any worker etc. See also the HAProxy Load Balancer configurations, which focus on master nodes
- cloudera_*.sh - Cloudera scripts:
  - cloudera_manager_api.sh - script to simplify querying the Cloudera Manager API using environment variables, prompts, authentication and sensible defaults. Built on top of curl_auth.sh
  - cloudera_manager_impala_queries*.sh - queries Cloudera Manager for recent Impala queries, failed queries, exceptions, DDL statements, metadata stale errors, metadata refresh calls etc. Built on top of cloudera_manager_api.sh
  - cloudera_manager_yarn_apps.sh - queries Cloudera Manager for recent Yarn apps. Built on top of cloudera_manager_api.sh
  - cloudera_navigator_api.sh - script to simplify querying the Cloudera Navigator API using environment variables, prompts, authentication and sensible defaults. Built on top of curl_auth.sh
  - cloudera_navigator_audit_logs.sh - fetches Cloudera Navigator audit logs for a given service, eg. hive/impala/hdfs, via the API, simplifying date handling, authentication and common settings. Built on top of cloudera_navigator_api.sh
  - cloudera_navigator_audit_logs_download.sh - downloads Cloudera Navigator audit logs for each service by year. Skips existing logs, deletes partially downloaded logs on failure, generally retry-safe (while true, Control-C, not kill -9 obviously). Built on top of cloudera_navigator_audit_logs.sh

See also Knowledge Base notes for Hadoop.
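The {db} / {table} templating used by the foreach_table scripts above can be sketched in a few lines. This is an illustrative simplification (using sed, and assuming db/table names contain no sed metacharacters), not the actual hive_foreach_table.sh:

```shell
# Render a SQL query template by substituting the {db} and {table}
# placeholders, as the foreach_table scripts do once per iteration.
render_query() {
  local template="$1" db="$2" table="$3"
  # fine for typical identifiers; would need escaping for sed metacharacters
  echo "$template" | sed "s/{db}/$db/g; s/{table}/$table/g"
}

render_query 'select count(*) from {db}.{table}' default web_logs
```

The real scripts loop this over the output of the corresponding list_databases / list_tables scripts and execute each rendered query.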
git/ , github/ , gitlab/ , bitbucket/ and azure_devops/ directories:
git/*.sh - Git scripts:precommit_run_changed_files.sh - runs pre-commit on all files changed on the current branch vs the default branch. Useful to reproduce pre-commit checks that are failing in pull requests to get your PRs to passgit_diff_commit.sh - quickly commits added or updated files to Git, showing a diff and easy enter prompt for each file. Super convenient for fast commits on the command line, and in vim and IDEs via hotkeysgit_review_push.sh - shows diff of what would be pushed upstream and prompts to push. Convenient for fast reviewed pushes via vim or IDEs hotkeysgit_branch_delete_squash_merged.sh - carefully detects if a squash merged branch you want to delete has no changes with the default trunk branch before deleting it. See Squash Merges in knowledge-base about why this is necessary.git_tag_release.sh - creates a Git tag, auto-incrementing a .N suffix on the year/month/day date format if no exact version givengit_foreach_branch.sh - executes a command on all branches (useful in heavily version branched repos like in my Dockerfiles repo)git_foreach_repo.sh - executes a command against all adjacent repos from a given repolist (used heavily by many adjacent scripts)git_foreach_modified.sh - executes a command against each file with git modified statusgit_foreach_repo_replace_readme_actions.sh - updates the README.md badges for GitHub Actions to match the local repo name. Useful to bulk fix copied badges quickly and easilygit_foreach_repo_update_readme.sh - git-diff-commits the README.md for each Git repo checkout using adjacent git_foreach_repo.sh and git_diff_commit.sh scripts. Useful to quickly bulk update README.md in all your projects, such as when references need updatinggit_merge_all.sh / git_merge_master.sh / git_merge_master_pull.sh - merges updates from master branch to all other branches to avoid drift on longer lived feature branches / version branches (eg. 
Dockerfiles repo)git_remotes_add_origin_providers.sh - auto-creates remotes for the 4 major public repositories (GitHub/GitLab/Bitbucket/Azure DevOps), useful for git pull -all to fetch and merge updates from all providers in one commandgit_remotes_set_multi_origin.sh - sets up multi-remote origin for unified push to automatically keep the 4 major public repositories in sync (especially useful for Bitbucket and Azure DevOps which don't have GitLab's auto-mirroring from GitHub feature)git_remotes_set_https_to_ssh.sh - converts local repo's remote URLs from https to ssh (more convenient with SSH keys instead of https auth tokens, especially since Azure DevOps expires personal access tokens every year)git_remotes_set_ssh_to_https.sh - converts local repo's remote URLs from ssh to https (to get through corporate firewalls or hotels if you travel a lot)git_remotes_set_https_creds_helpers.sh - adds Git credential helpers configuration to the local git repo to use http API tokens dynamically from environment variables if they're setgit_repos_pull.sh - pull multiple repos based on a source file mapping list - useful for easily sync'ing lots of Git repos among computersgit_repos_update.sh - same as above but also runs the make update build to install the latest dependencies, leverages the above scriptgit_grep_env_vars.sh - find environment variables in the current git repo's code base in the format SOME_VAR (useful to find undocumented environment variables in internal or open source projects such as ArgoCD eg. argoproj/argocd-cd #8680)git_log_empty_commits.sh - find empty commits in git history (eg. 
if a git filter-branch was run but --prune-empty was forgotten, leaking metadata like subjects containing file names or other sensitive info)git_graph_commit_history_gnuplot.sh - generates GNUplot graphs of Git commits per year and per month for the entire history of the local Git repo checkoutgit_graph_commit_history_mermaidjs.sh - generates MermaidJS graphs of Git commits per year and per month for the entire history of the local Git repo checkoutgit_graph_commit_times_gnuplot.sh - generates a GNUplot graph of Git commit times from the current Git repo checkout's git loggit_graph_commit_times_mermaidjs.sh - generates a MermaidJS graph of Git commit times from the current Git repo checkout's git loggit_graph_commit_times_gnuplot_all_repos.sh - generates GNUplot graph of the GitHub commit times from all local adjacent Git repo checkouts listed in setup/repos.txt using Git log in each checkoutgit_graph_commit_times_mermaidjs_all_repos.sh - generates MermaidJS graph of the GitHub commit times from all local adjacent Git repo checkouts listed in setup/repos.txt using Git log in each checkoutgit_revert_line.sh - reverts the first line that matches a given regex from the Git head commit's version of the same line number. Useful to revert some changes caused by over zealous sed'ing scripts, where you want to cherry-pick revert a single line changegit_files_in_history.sh - finds all filename / file paths in the git log history, useful for prepping for git filter-branchgit_filter_branch_fix_author.sh - rewrites Git history to replace author/committer name & email references (useful to replace default account commits). Powerful, read --help and man git-filter-branch carefully. Should only be used by Git Expertsgit_filter_repo_replace_text.sh - rewrites Git history to replace a given text to scrub a credential or other sensitive token from history. 
Refuses to operate on tokens less than 8 chars for safetygit_submodules_update_repos.sh - updates submodules (pulls and commits latest upstream github repo submodules) - used to cascade submodule updates throughout all my reposgit_askpass.sh - credential helper script to use environment variables for git authenticationmarkdown_generate_index.sh - generates a markdown index list from the headings in a given markdown file such as README.mdmarkdown_replace_index.sh - replaces a markdown index section in a given markdown file using markdown_generate_index.shgithub/*.sh - GitHub API / CLI scripts:github_api.sh - queries the GitHub API. Can infer GitHub user, repo and authentication token from local checkout or environment ( $GITHUB_USER , $GITHUB_TOKEN )github_install_binary.sh - installs a binary from GitHub releases into $HOME/bin or /usr/local/bin. Auto-determines the latest release if no version specified, detects and unpacks any tarball or zip filesgithub_foreach_repo.sh - executes a templated command for each non-fork GitHub repo, replacing the {owner} / {name} or {repo} placeholders in each iterationgithub_graph_commit_times_gnuplot.sh - generates GNUplot graph of GitHub commit times from all public GitHub repos for a given user. Fetches the commit data via the GitHub APIgithub_graph_commit_times_mermaidjs.sh - generates MermaidJS graph of the GitHub commit times from all public GitHub repos for a given user. 
Fetches the commit data via the GitHub APIgithub_clone_or_pull_all_repos.sh - git clones or pulls all repos for a user or organization into directories of the same name under the current directorygithub_download_release_file.sh - downloads a file from GitHub Releases, optionally determining the latest version, uses bin/download_url_file.shgithub_download_release_jar.sh - downloads a JAR file from GitHub Releases (used by install/download_*_jar.sh for things like JDBC drivers or Java decompilers), optionally determines latest version to download, and finally validates the downloaded file's formatgithub_invitations.sh - lists / accepts repo invitations. Useful to accept a large number of invites to repos generated by automationgithub_mirror_repos_to_gitlab.sh - creates/syncs GitHub repos to GitLab for migrations or to cron fast free Disaster Recovery, including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_bitbucket.sh - creates/syncs GitHub repos to BitBucket for migrations or to cron fast free Disaster Recovery, including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_aws_codecommit.sh - creates/syncs GitHub repos to AWS CodeCommit for migrations or to cron fast almost free Disaster Recovery (close to $0 compared to $100-400+ per month for Rewind BackHub), including all branches and tags, plus the repo descriptions. Note this doesn't include PRs/wikis/releasesgithub_mirror_repos_to_gcp_source_repos.sh - creates/syncs GitHub repos to GCP Source Repos for migrations or to cron fast almost free Disaster Recovery (close to $0 compared to $100-400+ per month for Rewind BackHub), including all branches and tags. 
Note this doesn't include repo description/PRs/wikis/releasesgithub_pull_request_create.sh - creates a Pull Request idempotently by first checking for an existing PR between the branches, and also checking if there are the necessary commits between the branches, to avoid common errors from blindly raising PRs. Useful to automate code promotion across environment branches. Also works across repo forks and is used by github_repo_fork_update.sh . Even populates github pull request template and does Jira ticket number replacement from branch prefixgithub_pull_request_preview.sh - opens a GitHub Pull Request preview page from the current local branch to the given or default branchgithub_push_pr_preview.sh - pushes to GitHub origin, sets upstream branch, then open a Pull Request preview from current branch to the given or default trunk branch in your browsergithub_push_pr.sh - pushes to GitHub origin, sets upstream branch, then idemopotently creates a Pull Request from current branch to the given or default trunk branch and opens the generated PR in your browser for reviewgithub_merge_branch.sh - merges one branch into another branch via a Pull Request for full audit tracking all changes. Useful to automate feature PRs, code promotion across environment branches, or backport hotfixes from Production or Staging to trunk branches such as master, main, dev or developgithub_remote_set_upstream.sh - in a forked GitHub repo's checkout, determine the origin of the fork using GitHub CLI and configure a git remote to the upstream. Useful to be able to easily pull updates from the original source repogithub_pull_merge_trunk.sh - pulls the origin or fork upstream repo's trunk branch and merges it into the local branch, In a forked GitHub repo's checkout, determines the origin of the fork using GitHub CLI, configures a git remote to the upstream, pulls the default branch and if on a branch other than the default then merges the default branch to the local current branch. 
Simplifies and automates keeping your checkout or forked repo up to date with the original source repo to quickly resolve merge conflicts locally and submit updated Pull Requestsgithub_forked_add_remote.sh - quickly adds a forked repo as a remote from an interactive men list of forked reposgithub_forked_checkout_branch.sh - quickly check out a forked repo's branch from an interactive menu lists of forked repos and their branchesgithub_tag_hashref.sh - Returns the GitHub commit hashref for a given GitHub Actions owner/repo@tag or https://github.com/owner/repo@tag . Useful for pinning 3rd party GitHub Actions to hashref instead of tag to follow GitHub Actions Best Practicesgithub_actions_foreach_workflow.sh - executes a templated command for each workflow in a given GitHub repo, replacing {name} , {id} and {state} in each iterationgithub_actions_aws_create_load_credential.sh - creates an AWS user with group/policy, generates and downloads access keys, and uploads them to the given repogithub_actions_in_use.sh - lists GitHub Actions directly referenced in the .github/workflows in the current local repo checkoutgithub_actions_in_use_repo.sh - lists GitHub Actions for a given repo via the API, including following imported reusable workflowsgithub_actions_in_use_across_repos.sh - lists GitHub Actions in use across all your reposgithub_actions_repos_lockdown.sh - secures GitHub Actions settings across all user repos to only GitHub, verified partners and selected 3rd party actionsgithub_actions_repo_set_secret.sh - sets a secret in the given repo from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )github_actions_repo_env_set_secret.sh - sets a secret in the given repo and environment from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )github_actions_repo_secrets_overriding_org.sh - finds any secrets for a repo that are overriding organization level secrets. 
Useful to combine with github_foreach_repo.sh for auditinggithub_actions_repo_restrict_actions.sh - restricts GitHub Actions in the given repo to only running actions from GitHub and verfied partner companies (.eg AWS, Docker)github_actions_repo_actions_allow.sh - allows select 3rd party GitHub Actions in the given repogithub_actions_runner.sh - generates a GitHub Actions self-hosted runner token for a given Repo or Organization via the GitHub API and then runs a dockerized GitHub Actions runner with the appropriate configurationgithub_actions_runner_local.sh - downloads, configures and runs a local GitHub Actions Runner for Linux or Macgithub_actions_runner_token.sh - generates a GitHub Actions runner token to register a new self-hosted runnergithub_actions_runners.sh - lists GitHub Actions self-hosted runners for a given Repo or Organizationgithub_actions_delete_offline_runners.sh - deletes offline GitHub Actions self-hosted runners. Useful to clean up short-lived runners eg. Docker, Kubernetesgithub_actions_workflows.sh - lists GitHub Actions workflows for a given repo (or auto-infers local repository)github_actions_workflow_runs.sh - lists GitHub Actions workflow runs for a given workflow id or namegithub_actions_workflows_status.sh - lists all GitHub Actions workflows and their statuses for a given repogithub_actions_workflows_state.sh - lists GitHub Actions workflows enabled/disabled states (GitHub now disables workflows after 6 months without a commit)github_actions_workflows_disabled.sh - lists GitHub Actions workflows that are disabled. Combine with github_foreach_repo.sh to scan all repos to find disabled workflowsgithub_actions_workflow_enable.sh - enables a given GitHub Actions workflowgithub_actions_workflows_enable_all.sh - enables all GitHub Actions workflows in a given repo. 
Useful to undo GitHub disabling all workflows in a repo after 6 months without a commitgithub_actions_workflows_trigger_all.sh - triggers all workflows for the given repogithub_actions_workflows_cancel_all_runs.sh - cancels all workflow runs for the given repogithub_actions_workflows_cancel_waiting_runs.sh - cancels workflow runs that are in waiting state, eg. waiting for old deployment approvalsgithub_ssh_get_user_public_keys.sh - fetches a given GitHub user's public SSH keys via the API for piping to ~/.ssh/authorized_keys or adjacent toolsgithub_ssh_get_public_keys.sh - fetches the currently authenticated GitHub user's public SSH keys via the API, similar to above but authenticated to get identifying key commentsgithub_ssh_add_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated GitHub account. Specify pubkey files (default: ~/.ssh/id_rsa.pub ) or read from standard input for piping from adjacent toolsgithub_ssh_delete_public_keys.sh - deletes given SSH keys from the currently authenticated GitHub account by key id or title regex matchgithub_gpg_get_user_public_keys.sh - fetches a given GitHub user's public GPG keys via the APIgithub_generate_status_page.sh - generates a STATUS.md page by merging all the README.md headers for all of a user's non-forked GitHub repos or a given list of any repos etc.github_purge_camo_cache.sh - send HTTP Purge requests to all camo urls (badge caches) for the current or given GitHub repo's landing/README.md pagegithub_ip_ranges.sh - returns GitHub's IP ranges, either all by default or for a select given service such as hooks or actionsgithub_sync_repo_descriptions.sh - syncs GitHub repo descriptions to GitLab & BitBucket reposgithub_release.sh - creates a GitHub Release, auto-incrementing a .N suffix on the year/month/day date format if no exact version givengithub_repo_check_pat_token.sh - checks the given PAT token can access the given GitHub repo. 
Useful to test a PAT token used for integrations like ArgoCDgithub_repo_description.sh - fetches the given repo's description (used by github_sync_repo_descriptions.sh )github_repo_find_files.sh - finds files matching a regex in the current or given GitHub repo via the GitHub APIgithub_repo_latest_release.sh - returns the latest release tag for a given GitHub repo via the GitHub APIgithub_repo_latest_release_filter.sh - returns the latest release tag matching a given regex filter for a given GitHub repo via the GitHub API. Useful for getting the latest version of things like Kustomize which has other releases for kyamlgithub_repo_stars.sh - fetches the stars, forks and watcher counts for a given repogithub_repo_teams.sh - fetches the GitHub Enterprise teams and their role permisions for a given repo. Combine with github_foreach_repo.sh to audit your all your personal or GitHub organization's reposgithub_repo_collaborators.sh - fetches a repo's granted users and outside invited collaborators as well as their role permisions for a given repo. Combine with github_foreach_repo.sh to audit your all your personal or GitHub organization's reposgithub_repo_protect_branches.sh - enables branch protections on the given repo. Can specify one or more branches to protect, otherwise finds and applies to any of master , main , develop , dev , staging , productiongithub_repos_find_files.sh - finds files matching a regex across all repos in the current GitHub organization or user accountgithub_repo_fork_sync.sh - sync's current or given fork, then runs github_repo_fork_update.sh to cascade changes to major branches via Pull Requests for auditabilitygithub_repo_fork_update.sh - updates a forked repo by creating pull requests for full audit tracking and auto-merges PRs for non-production branchesgithub_repos_public.sh - lists public repos for a user or organization. 
  Useful to periodically scan and account for any public repos
- github_repos_disable_wiki.sh - disables the Wiki on one or more given repos to prevent documentation fragmentation and make people use the centralized documentation tool, eg. Confluence or Slite
- github_repos_with_few_users.sh - finds repos with few or no users (default: 1), which in Enterprises is a sign that a user has created a repo without assigning team privileges
- github_repos_with_few_teams.sh - finds repos with few or no teams (default: 0), which in Enterprises is a sign that a user has created a repo without assigning team privileges
- github_repos_without_branch_protections.sh - finds repos without any branch protection rules (use github_repo_protect_branches.sh on such repos)
- github_repos_not_in_terraform.sh - finds all non-fork repos for the current or given user/organization which are not found in $PWD/*.tf Terraform code
- github_teams_not_in_terraform.sh - finds all teams for a given organization which are not found in $PWD/*.tf Terraform code
- github_repos_sync_status.sh - determines whether each GitHub repo's mirrors on GitLab / BitBucket / Azure DevOps are up to date with the latest commits, by querying all 3 APIs and comparing master branch hashrefs
- github_teams_not_idp_synced.sh - finds GitHub teams that aren't synced from an IdP like Azure AD. These should usually be migrated or removed
- github_user_repos_stars.sh - fetches the total number of stars for all original source public repos for a given user
- github_user_repos_forks.sh - fetches the total number of forks for all original source public repos for a given user
- github_user_repos_count.sh - fetches the total number of original source public repos for a given username
- github_user_followers.sh - fetches the number of followers for a given username
- github_url_clipboard.sh - copies a GitHub URL file's contents to the clipboard, converting the URL to a raw GitHub content URL where necessary

gitlab/*.sh - GitLab API scripts:

- gitlab_api.sh - queries the GitLab API. Can infer GitLab user, repo and authentication token from local checkout or environment ( $GITLAB_USER , $GITLAB_TOKEN )
- gitlab_install_binary.sh - installs a binary from GitLab releases into $HOME/bin or /usr/local/bin. Auto-determines the latest release if no version is specified, detects and unpacks any tarball or zip files
- gitlab_push_mr_preview.sh - pushes to GitLab origin, sets the upstream branch, then opens a Merge Request preview from the current to the default branch
- gitlab_push_mr.sh - pushes to GitLab origin, sets the upstream branch, then idempotently creates a Merge Request from the current branch to the given or default trunk branch and opens the generated MR in your browser for review
- gitlab_foreach_repo.sh - executes a templated command for each GitLab project/repo, replacing the {user} and {project} in each iteration
- gitlab_project_latest_release.sh - returns the latest release tag for a given GitLab project (repo) via the GitLab API
- gitlab_project_set_description.sh - sets the description for one or more projects using the GitLab API
- gitlab_project_set_env_vars.sh - adds / updates GitLab project-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- gitlab_group_set_env_vars.sh - adds / updates GitLab group-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- gitlab_project_create_import.sh - creates a GitLab repo as an import from a given URL, and mirrors if on GitLab Premium (can only manually configure for public repos on free tier, the API doesn't support configuring even public repos on free)
- gitlab_project_protect_branches.sh - enables branch protections on the given project. Can specify one or more branches to protect, otherwise finds and applies to any of master , main , develop , dev , staging , production
- gitlab_project_mirrors.sh - lists each GitLab repo and whether it is a mirror or not
- gitlab_pull_mirror.sh - triggers a GitLab pull mirroring for a given project's repo, or auto-infers the project name from the local git repo
- gitlab_ssh_get_user_public_keys.sh - fetches a given GitLab user's public SSH keys via the API, with identifying comments, for piping to ~/.ssh/authorized_keys or adjacent tools
- gitlab_ssh_get_public_keys.sh - fetches the currently authenticated GitLab user's public SSH keys via the API
- gitlab_ssh_add_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated GitLab account. Specify pubkey files (default: ~/.ssh/id_rsa.pub ) or read from standard input for piping from adjacent tools
- gitlab_ssh_delete_public_keys.sh - deletes given SSH keys from the currently authenticated GitLab account by key id or title regex match
- gitlab_validate_ci_yaml.sh - validates a .gitlab-ci.yml file via the GitLab API

bitbucket/*.sh - BitBucket API scripts:

- bitbucket_api.sh - queries the BitBucket API. Can infer BitBucket user, repo and authentication token from local checkout or environment ( $BITBUCKET_USER , $BITBUCKET_TOKEN )
- bitbucket_foreach_repo.sh - executes a templated command for each BitBucket repo, replacing the {user} and {repo} in each iteration
- bitbucket_workspace_set_env_vars.sh - adds / updates Bitbucket workspace-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- bitbucket_repo_set_env_vars.sh - adds / updates Bitbucket repo-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- bitbucket_repo_set_description.sh - sets the description for one or more repos using the BitBucket API
- bitbucket_enable_pipelines.sh - enables the CI/CD pipelines for all repos
- bitbucket_disable_pipelines.sh - disables the CI/CD pipelines for all repos
- bitbucket_repo_enable_pipeline.sh - enables the CI/CD pipeline for a given repo
- bitbucket_repo_disable_pipeline.sh - disables the CI/CD pipeline for a given repo
- bitbucket_ssh_get_public_keys.sh - fetches the currently authenticated BitBucket user's public SSH keys via the API for piping to ~/.ssh/authorized_keys or adjacent tools
- bitbucket_ssh_add_public_keys.sh - uploads SSH keys from local files or standard input to the currently authenticated BitBucket account. Specify pubkey files (default: ~/.ssh/id_rsa.pub ) or read from standard input for piping from adjacent tools
- bitbucket_ssh_delete_public_keys.sh - deletes given SSH keys from the currently authenticated BitBucket account by key id or title regex match

See also Knowledge Base notes for Git.
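Several of the `*_set_env_vars.sh` scripts above accept variables in either key=value or shell `export` format, as args or on stdin. A minimal sketch of that input normalization, assuming a hypothetical `normalize_env_pairs` helper (the function name and pipeline are illustrative, not the repo's actual code):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Normalize 'export KEY=value' or 'KEY=value' lines from stdin
# into one "KEY value" pair per line, ready to feed to an API call.
normalize_env_pairs() {
  sed -E 's/^export[[:space:]]+//' |
  grep -E '^[A-Za-z_][A-Za-z0-9_]*=' |
  while IFS='=' read -r key value; do
      printf '%s %s\n' "$key" "$value"
  done
}

# Example: mixed export/plain input, as if piped from a creds script
printf 'export AWS_ACCESS_KEY_ID=AKIAEXAMPLE\nAWS_SECRET_ACCESS_KEY=secret123\n' |
  normalize_env_pairs
```

The real scripts additionally handle quoting and make one API call per variable; this only shows the accepted input shapes.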
jenkins/ , terraform/ , teamcity/ , buildkite/ , circleci/ , travis/ , azure_devops/ , ..., cicd/ directories:
- appveyor_api.sh - queries AppVeyor's API with authentication

azure_devops/*.sh - Azure DevOps scripts:

- azure_devops_api.sh - queries Azure DevOps's API with authentication
- azure_devops_foreach_repo.sh - executes a templated command for each Azure DevOps repo, replacing {user} , {org} , {project} and {repo} in each iteration
- azure_devops_to_github_migration.sh - migrates one or all Azure DevOps git repos to GitHub, including all branches, and sets the default branch to match via the APIs to maintain the same checkout behaviour
- azure_devops_disable_repos.sh - disables one or more given Azure DevOps repos (to prevent further pushes to them after migration to GitHub)

circleci/*.sh - CircleCI scripts:

- circleci_api.sh - queries CircleCI's API with authentication
- circleci_project_set_env_vars.sh - adds / updates CircleCI project-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- circleci_context_set_env_vars.sh - adds / updates CircleCI context-level environment variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- circleci_project_delete_env_vars.sh - deletes CircleCI project-level environment variable(s) via the API
- circleci_context_delete_env_vars.sh - deletes CircleCI context-level environment variable(s) via the API
- circleci_local_execute.sh - installs the CircleCI CLI and executes .circleci/config.yml locally
- circleci_public_ips.sh - lists CircleCI public IP addresses via dnsjson.com
- codeship_api.sh - queries CodeShip's API with authentication
- drone_api.sh - queries Drone.io's API with authentication
- shippable_api.sh - queries Shippable's API with authentication
- wercker_app_api.sh - queries Wercker's Applications API with authentication
- gocd_api.sh - queries GoCD's API
- gocd.sh - one-touch GoCD CI:
  - loads a config repo (default: $PWD/setup/gocd_config_repo.json ) from which to source pipeline(s) ( .gocd.yml )
  - runs against the repo's .gocd.yml config (all mine have it), mimicking the structure of fully managed CI systems
- concourse.sh - one-touch Concourse CI:
  - loads the pipeline from $PWD/.concourse.yml
  - runs against the repo's .concourse.yml config (all mine have it), mimicking the structure of fully managed CI systems
- fly.sh - shortens the Concourse fly command to not have to specify the target all the time

jenkins/*.sh - Jenkins CI scripts:

- jenkins.sh - one-touch Jenkins CI: runs the repo's Jenkinsfile pipeline via a job created from $PWD/setup/jenkins-job.xml (all mine have it)
- jenkins_api.sh - queries the Jenkins Rest API, handles authentication, pre-fetches the CSRF protection token crumb, supports many environment variables such as $JENKINS_URL for ease of use
- jenkins_jobs.sh - lists Jenkins jobs (pipelines)
- jenkins_foreach_job.sh - runs a templated command for each Jenkins job
- jenkins_jobs_download_configs.sh - downloads all Jenkins job configs to xml files of the same name
- jenkins_job_config.sh - gets or sets a Jenkins job's config
- jenkins_job_description.sh - gets or sets a Jenkins job's description
- jenkins_job_enable.sh - enables a Jenkins job by name
- jenkins_job_disable.sh - disables a Jenkins job by name
- jenkins_job_trigger.sh - triggers a Jenkins job by name
- jenkins_job_trigger_with_params.sh - triggers a Jenkins job with parameters which can be passed as --data KEY=VALUE
- jenkins_jobs_enable.sh - enables all Jenkins jobs/pipelines with names matching a given regex
- jenkins_jobs_disable.sh - disables all Jenkins jobs/pipelines with names matching a given regex
- jenkins_builds.sh - lists the latest Jenkins builds for every job
- jenkins_cred_add_cert.sh - creates a Jenkins certificate credential from a PKCS#12 keystore
- jenkins_cred_add_kubernetes_sa.sh - creates a Jenkins Kubernetes service account credential
- jenkins_cred_add_secret_file.sh - creates a Jenkins secret file credential from a file
- jenkins_cred_add_secret_text.sh - creates a Jenkins secret string credential from a string or a file
- jenkins_cred_add_ssh_key.sh - creates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_add_user_pass.sh - creates a Jenkins username/password credential
- jenkins_cred_delete.sh - deletes a given Jenkins credential by id
- jenkins_cred_list.sh - lists Jenkins credentials IDs and Names
- jenkins_cred_update_cert.sh - updates a Jenkins certificate credential from a PKCS#12 keystore
- jenkins_cred_update_kubernetes_sa.sh - updates a Jenkins Kubernetes service account credential
- jenkins_cred_update_secret_file.sh - updates a Jenkins secret file credential from a file
- jenkins_cred_update_secret_text.sh - updates a Jenkins secret string credential from a string or a file
- jenkins_cred_update_ssh_key.sh - updates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_update_user_pass.sh - updates a Jenkins username/password credential
- jenkins_cred_set_cert.sh - creates or updates a Jenkins certificate credential from a PKCS#12 keystore
- jenkins_cred_set_kubernetes_sa.sh - creates or updates a Jenkins Kubernetes service account credential
- jenkins_cred_set_secret_file.sh - creates or updates a Jenkins secret file credential from a file
- jenkins_cred_set_secret_text.sh - creates or updates a Jenkins secret string credential from a string or a file
- jenkins_cred_set_ssh_key.sh - creates or updates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_set_user_pass.sh - creates or updates a Jenkins username/password credential
- jenkins_cli.sh - shortens the jenkins-cli.jar command by auto-inferring basic configurations, auto-downloading the CLI if absent, inferring a bunch of Jenkins related variables like $JENKINS_URL , $JENKINS_CLI_ARGS and authentication using $JENKINS_USER / $JENKINS_PASSWORD , or finds the admin password from inside the local docker container. Used heavily by the jenkins.sh one-shot setup and the following scripts:
- jenkins_foreach_job_cli.sh - runs a templated command for each Jenkins job
- jenkins_create_job_parallel_test_runs.sh - creates a freestyle parameterized test sleep job and launches N parallel runs of it to test scaling and parallelization of Jenkins on Kubernetes agents
- jenkins_create_job_check_gcp_serviceaccount.sh - creates a freestyle test job which runs a GCP Metadata query to determine the GCP serviceaccount the agent pod is operating under, to check GKE Workload Identity integration
- jenkins_jobs_download_configs_cli.sh - downloads all Jenkins job configs to xml files of the same name
- jenkins_cred_cli_add_cert.sh - creates a Jenkins certificate credential from a PKCS#12 keystore
- jenkins_cred_cli_add_kubernetes_sa.sh - creates a Jenkins Kubernetes service account credential
- jenkins_cred_cli_add_secret_file.sh - creates a Jenkins secret file credential from a file
- jenkins_cred_cli_add_secret_text.sh - creates a Jenkins secret string credential from a string or a file
- jenkins_cred_cli_add_ssh_key.sh - creates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_cli_add_user_pass.sh - creates a Jenkins username/password credential
- jenkins_cred_cli_delete.sh - deletes a given Jenkins credential by id
- jenkins_cred_cli_list.sh - lists Jenkins credentials IDs and Names
- jenkins_cred_cli_update_cert.sh - updates a Jenkins
  certificate credential from a PKCS#12 keystore
- jenkins_cred_cli_update_kubernetes_sa.sh - updates a Jenkins Kubernetes service account credential
- jenkins_cred_cli_update_secret_file.sh - updates a Jenkins secret file credential from a file
- jenkins_cred_cli_update_secret_text.sh - updates a Jenkins secret string credential from a string or a file
- jenkins_cred_cli_update_ssh_key.sh - updates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_cli_update_user_pass.sh - updates a Jenkins username/password credential
- jenkins_cred_cli_set_cert.sh - creates or updates a Jenkins certificate credential from a PKCS#12 keystore
- jenkins_cred_cli_set_kubernetes_sa.sh - creates or updates a Jenkins Kubernetes service account credential
- jenkins_cred_cli_set_secret_file.sh - creates or updates a Jenkins secret file credential from a file
- jenkins_cred_cli_set_secret_text.sh - creates or updates a Jenkins secret string credential from a string or a file
- jenkins_cred_cli_set_ssh_key.sh - creates or updates a Jenkins SSH key credential from a string or an SSH private key file
- jenkins_cred_cli_set_user_pass.sh - creates or updates a Jenkins username/password credential
- jenkins_password.sh - gets the Jenkins admin password from the local docker container. Used by jenkins_cli.sh
- jenkins_plugins_latest_versions.sh - finds the latest versions of given Jenkins plugins. Useful to programmatically upgrade your Jenkins on Kubernetes plugins defined in values.yaml
- check_jenkinsfiles.sh - validates all *Jenkinsfile* files in the given directory trees using the online Jenkins validator

teamcity/*.sh - TeamCity CI scripts:

- teamcity.sh - one-touch TeamCity CI cluster: auto-configures VCS integration if $PWD has a .teamcity.vcs.json / .teamcity.vcs.ssh.json / .teamcity.vcs.oauth.json and the corresponding $TEAMCITY_SSH_KEY or $TEAMCITY_GITHUB_CLIENT_ID + $TEAMCITY_GITHUB_CLIENT_SECRET environment variables
- teamcity_api.sh - queries TeamCity's API, auto-handling authentication and other quirks of the API
- teamcity_create_project.sh - creates a TeamCity project using the API
- teamcity_create_github_oauth_connection.sh - creates a TeamCity GitHub OAuth VCS connection in the Root project, useful for bootstrapping projects from VCS configs
- teamcity_create_vcs_root.sh - creates a TeamCity VCS root from a saved configuration (XML or JSON), as downloaded by teamcity_export_vcs_roots.sh
- teamcity_upload_ssh_key.sh - uploads an SSH private key to a TeamCity project (for use in VCS root connections)
- teamcity_agents.sh - lists TeamCity agents, their connected state, authorized state, whether enabled and up to date
- teamcity_builds.sh - lists the last 100 TeamCity builds along with their state (eg. finished ) and status (eg. SUCCESS / FAILURE )
- teamcity_buildtypes.sh - lists TeamCity buildTypes (pipelines) along with their project and IDs
- teamcity_buildtype_create.sh - creates a TeamCity buildType from a local JSON configuration (see teamcity_buildtypes_download.sh )
- teamcity_buildtype_set_description_from_github.sh - syncs a TeamCity buildType's description from its GitHub repo description
- teamcity_buildtypes_set_description_from_github.sh - syncs all TeamCity buildType descriptions from their GitHub repos where available
- teamcity_export.sh - downloads TeamCity configs to local JSON files in per-project directories mimicking the native TeamCity directory structure and file naming
- teamcity_export_project_config.sh - downloads TeamCity project config to local JSON files
- teamcity_export_buildtypes.sh - downloads TeamCity buildType config to local JSON files
- teamcity_export_vcs_roots.sh - downloads TeamCity VCS root config to local JSON files
- teamcity_projects.sh - lists TeamCity project IDs and Names
- teamcity_project_set_versioned_settings.sh - configures a project to track all changes to a VCS (eg. GitHub)
- teamcity_project_vcs_versioning.sh - quickly toggles VCS versioning on/off for a given TeamCity project (useful for testing without auto-committing)
- teamcity_vcs_roots.sh - lists TeamCity VCS root IDs and Names

travis/*.sh - Travis CI API scripts (one of my all-time favourite CI systems):

- travis_api.sh - queries the Travis CI API with authentication using $TRAVIS_TOKEN
- travis_repos.sh - lists Travis CI repos
- travis_foreach_repo.sh - executes a templated command against all Travis CI repos
- travis_repo_build.sh - triggers a build for the given repo
- travis_repo_caches.sh - lists caches for a given repo
- travis_repo_crons.sh - lists crons for a given repo
- travis_repo_env_vars.sh - lists environment variables for a given repo
- travis_repo_settings.sh - lists settings for a given repo
- travis_repo_create_cron.sh - creates a cron for a given repo and branch
- travis_repo_delete_crons.sh - deletes all crons for a given repo
- travis_repo_delete_caches.sh - deletes all caches for a given repo (sometimes clears build problems)
- travis_delete_cron.sh - deletes a Travis CI cron by ID
- travis_repos_settings.sh - lists settings for all repos
- travis_repos_caches.sh - lists caches for all repos
- travis_repos_crons.sh - lists crons for all repos
- travis_repos_create_cron.sh - creates a cron for all repos
- travis_repos_delete_crons.sh - deletes all crons for all repos
- travis_repos_delete_caches.sh - deletes all caches for all repos
- travis_lint.sh - lints a given .travis.yml using the API

buildkite/*.sh - BuildKite API scripts:

- buildkite_api.sh - queries the BuildKite API, handling authentication using $BUILDKITE_TOKEN
- buildkite_pipelines.sh - lists BuildKite pipelines for your $BUILDKITE_ORGANIZATION / $BUILDKITE_USER
- buildkite_foreach_pipeline.sh - executes a templated command for each Buildkite pipeline, replacing the {user} and {pipeline} in each iteration
- buildkite_agent.sh - runs a buildkite agent locally on Linux or Mac, or in Docker with a choice of Linux distros
- buildkite_agents.sh - lists the Buildkite agents
  connected, along with their hostname, IP, start date and agent details
- buildkite_pipelines.sh - lists Buildkite pipelines
- buildkite_create_pipeline.sh - creates a Buildkite pipeline from a JSON configuration (like from buildkite_get_pipeline.sh or buildkite_save_pipelines.sh )
- buildkite_get_pipeline.sh - gets details for a specific Buildkite pipeline in JSON format
- buildkite_update_pipeline.sh - updates a BuildKite pipeline from a configuration provided via stdin or from a file saved via buildkite_get_pipeline.sh
- buildkite_patch_pipeline.sh - updates a BuildKite pipeline from a partial configuration provided as an arg, via stdin, or from a file saved via buildkite_get_pipeline.sh
- buildkite_pipeline_skip_settings.sh - lists the skip intermediate build settings for one or more given BuildKite pipelines
- buildkite_pipeline_set_skip_settings.sh - configures given or all BuildKite pipelines to skip intermediate builds and cancel running builds in favour of the latest build
- buildkite_cancel_scheduled_builds.sh - cancels BuildKite scheduled builds (to clear a backlog due to offline agents and just focus on new builds)
- buildkite_cancel_running_builds.sh - cancels BuildKite running builds (to clear them and restart later, eg. after an agent / environment change / fix)
- buildkite_pipeline_disable_forked_pull_requests.sh - disables forked pull request builds on a BuildKite pipeline to protect your build environment from arbitrary code execution security vulnerabilities
- buildkite_pipelines_vulnerable_forked_pull_requests.sh - prints the status of each pipeline, which should all return false, otherwise run the above script to close the vulnerability
- buildkite_rebuild_cancelled_builds.sh - triggers rebuilds of the last N cancelled builds in the current pipeline
- buildkite_rebuild_failed_builds.sh - triggers rebuilds of the last N failed builds in the current pipeline (eg. after agent restart / environment change / fix)
- buildkite_rebuild_all_pipelines_last_cancelled.sh - triggers rebuilds of the last cancelled build in each pipeline in the organization
- buildkite_rebuild_all_pipelines_last_failed.sh - triggers rebuilds of the last failed build in each pipeline in the organization
- buildkite_retry_jobs_dead_agents.sh - triggers job retries where jobs failed due to killed agents, continuing builds from that point and replacing their false negative failed status with the real final status, slightly better than rebuilding entire jobs which happen under a new build
- buildkite_recreate_pipeline.sh - recreates a pipeline to wipe out all stats (see url and badge caveats in --help )
- buildkite_running_builds.sh - lists running builds and the agent they're running on
- buildkite_save_pipelines.sh - saves all BuildKite pipelines in your $BUILDKITE_ORGANIZATION to local JSON files in $PWD/.buildkite-pipelines/
- buildkite_set_pipeline_description.sh - sets the description of one or more pipelines using the BuildKite API
- buildkite_set_pipeline_description_from_github.sh - sets a Buildkite pipeline description to match its source GitHub repo
- buildkite_sync_pipeline_descriptions_from_github.sh - sets each BuildKite pipeline's description to match its source GitHub repo
- buildkite_trigger.sh - triggers a BuildKite build job for a given pipeline
- buildkite_trigger_all.sh - same as above but for all pipelines

terraform_cloud_*.sh - Terraform Cloud API scripts:

- terraform_cloud_api.sh - queries the Terraform Cloud API, handling authentication using $TERRAFORM_TOKEN
- terraform_cloud_ip_ranges.sh - returns the list of IP ranges for Terraform Cloud
- terraform_cloud_organizations.sh - lists Terraform Cloud organizations
- terraform_cloud_workspaces.sh - lists Terraform Cloud workspaces
- terraform_cloud_workspace_vars.sh - lists Terraform Cloud workspace variables
- terraform_cloud_workspace_set_vars.sh - adds / updates Terraform workspace-level sensitive environment/terraform variable(s) via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- terraform_cloud_workspace_delete_vars.sh - deletes one or more Terraform workspace-level variables
- terraform_cloud_varsets.sh - lists Terraform Cloud variable sets
- terraform_cloud_varset_vars.sh - lists Terraform Cloud variables in one or all variable sets for the given organization
- terraform_cloud_varset_set_vars.sh - adds / updates Terraform sensitive environment/terraform variable(s) in a given variable set via the API from key=value or shell export format, as args or via stdin (eg. piped from aws_csv_creds.sh )
- terraform_cloud_varset_delete_vars.sh - deletes one or more Terraform variables in a given variable set

terraform_*.sh - Terraform scripts:

- terraform_gcs_backend_version.sh - determines the Terraform state version from the tfstate file in a GCS bucket found in a local given backend.tf
- terraform_gitlab_download_backend_variable.sh - downloads backend.tf from a GitLab CI/CD variable to be able to quickly iterate plans locally
- terraform_import.sh - finds given resource types in ./*.tf code or Terraform plan output that are not in Terraform state and imports them
- terraform_import_aws_iam_users.sh - parses Terraform plan output to import new aws_iam_user additions into Terraform state
- terraform_import_aws_iam_groups.sh - parses Terraform plan output to import new aws_iam_group additions into Terraform state
- terraform_import_aws_iam_policies.sh - parses Terraform plan output to import new aws_iam_policy additions, resolves their ARNs and imports them into Terraform state
- terraform_import_aws_sso_permission_sets.sh - finds all aws_ssoadmin_permission_set in ./*.tf code, resolves the ARNs and imports them to Terraform state
- terraform_import_aws_sso_account_assignments.sh - parses Terraform plan output to import new aws_ssoadmin_account_assignment additions into Terraform state
- terraform_import_aws_sso_managed_policy_attachments.sh - parses Terraform plan output to import new aws_ssoadmin_managed_policy_attachment additions into Terraform state
- terraform_import_aws_sso_permission_set_inline_policies.sh - parses Terraform plan output to import new aws_ssoadmin_permission_set_inline_policy additions into Terraform state
- terraform_import_github_repos.sh - finds all github_repository in ./*.tf code or Terraform plan output that are not in Terraform state and imports them. See also github_repos_not_in_terraform.sh
- terraform_import_github_team.sh - imports a given GitHub team into a given Terraform state resource, by first querying the GitHub API for the team ID needed to import into Terraform
- terraform_import_github_teams.sh - finds all github_team in ./*.tf code or Terraform plan output that are not in Terraform state, then queries the GitHub API for their IDs and imports them. See also github_teams_not_in_terraform.sh
- terraform_import_github_team_repos.sh - finds all github_team_repository in the Terraform plan that would be added, then queries the GitHub API for the repo and team IDs and, if they both exist, imports them to Terraform state
- terraform_resources.sh - external program to get all resource ids and attributes for a given resource type, to work around the Terraform splat expression limitation (#19931)
- terraform_managed_resource_types.sh - quick parse of what Terraform resource types are found in *.tf files under the current or given directory tree. Useful to give you a quick glance of what services you are managing
- terraform_registry_url_extract.sh - extracts the Terraform Registry URL in either tfr:// or https://registry.terraform.io/ format from a given string, file or standard input. Useful to fast-load Terraform Module documentation via editor/IDE hotkeys (see .vimrc). Based on urlextract.sh above
- terraform_registry_url_to_https.sh - converts one or more Terraform Registry URLs from tfr:// to https://registry.terraform.io/ format
- terraform_registry_url_open.sh - opens the Terraform Registry URL given as a string arg, file or standard input, in either tfr:// or https://registry.terraform.io/ format

checkov_resource_*.sh - Checkov resource counts - useful to estimate Bridgecrew Cloud costs which are charged per resource:

- checkov_resource_count.sh - counts the number of resources Checkov is scanning in the current or given directory
- checkov_resource_count_all.sh - counts the total number of resources Checkov is scanning across all given repo checkouts
- octopus_api.sh - queries the Octopus Deploy API

See also Knowledge Base notes for CI/CD.
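The terraform_import_*.sh scripts share one core idea: parse `terraform plan` output for resources that would be created but are missing from state, then import each one. A rough sketch of just the parsing step, run against fabricated plan text (the real scripts also resolve IDs/ARNs per resource type before importing):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fabricated 'terraform plan' excerpt for illustration only
plan_output='  # aws_iam_user.alice will be created
  # aws_iam_user.bob will be created
  # aws_s3_bucket.logs will be updated in-place'

# Print the address of each resource the plan would create -
# these are the candidates for 'terraform import'
awk '/will be created/ { sub(/^[[:space:]]*# /, ""); print $1 }' <<< "$plan_output"
```

Each printed address would then be fed to `terraform import <address> <real-world-id>`, with the ID lookup being the per-resource-type work the individual scripts do.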
ai/ and ipaas/ directories:
- openai_api.sh - queries the OpenAI (ChatGPT) API with authentication
- make_api.sh - queries the Make.com API with authentication

internet/ , cloudflare/ , pingdom/ , terraform/ directories:
- pastebin.sh - uploads a file to https://pastebin.com ; the script auto-determines which syntax highlighting to add since the API doesn't auto-infer it
- dpaste.sh - uploads a file to https://dpaste.com ; the script auto-determines which syntax highlighting to add since the API doesn't auto-infer it
- termbin.sh - uploads a file to https://termbin.com (site has no syntax highlighting)
- 0x0.sh - uploads a file to https://0x0.st (fast)
- imgur.sh - uploads an image file to https://imgur.com
- file.io.sh - uploads a file to https://file.io with 2-week, single-download retention
- catbox.sh - uploads a file to https://catbox.moe/ with permanent retention (slow)
- litterbox.sh - uploads a file to https://litterbox.catbox.moe/ with temporary retention (slow)
- digital_ocean_api.sh / doapi.sh - queries the Digital Ocean API with authentication
- doctl ( install/install_doctl.sh )
- atlassian_ip_ranges.sh - lists Atlassian's IPv4 and/or IPv6 cidr ranges via its API
- circleci_public_ips.sh - lists CircleCI public IP addresses via dnsjson.com

cloudflare_*.sh - Cloudflare API queries and reports:

- cloudflare_api.sh - queries the Cloudflare API with authentication
- cloudflare_ip_ranges.sh - lists Cloudflare's IPv4 and/or IPv6 cidr ranges via its API
- cloudflare_custom_certificates.sh - lists any custom SSL certificates in a given Cloudflare zone along with their status and expiry date
- cloudflare_dns_records.sh - lists any Cloudflare DNS records for a zone, including the type and ttl
- cloudflare_dns_records_all_zones.sh - same as above but for all zones
- cloudflare_dns_record_create.sh - creates a DNS record in the given domain
- cloudflare_dns_record_update.sh - updates a DNS record in the given domain
- cloudflare_dns_record_delete.sh - deletes a DNS record in the given domain
- cloudflare_dns_record_details.sh - lists the details for a DNS record in the given domain in JSON format for further pipe processing
- cloudflare_dnssec.sh - lists the Cloudflare DNSSec status for all zones
- cloudflare_firewall_rules.sh - lists Cloudflare Firewall rules, optionally with a filter expression
- cloudflare_firewall_access_rules.sh - lists Cloudflare Firewall Access rules, optionally with a filter expression
- cloudflare_foreach_account.sh - executes a templated command for each Cloudflare account, replacing the {account_id} and {account_name} in each iteration (useful for chaining with cloudflare_api.sh )
- cloudflare_foreach_zone.sh - executes a templated command for each Cloudflare zone, replacing the {zone_id} and {zone_name} in each iteration (useful for chaining with cloudflare_api.sh , used by adjacent cloudflare_*_all_zones.sh scripts)
- cloudflare_purge_cache.sh - purges the entire Cloudflare cache
- cloudflare_ssl_verified.sh - gets the Cloudflare zone SSL verification status for a given zone
- cloudflare_ssl_verified_all_zones.sh - same as above for all zones
- cloudflare_zones.sh - lists Cloudflare zone names and IDs (needed for writing Terraform Cloudflare code)
- datadog_api.sh - queries the DataDog API with authentication
- dnsjson.sh - queries dnsjson.com for DNS records
- gitguardian_api.sh - queries the GitGuardian API with authentication
- jira_api.sh - queries the Jira API with authentication
- kong_api.sh - queries the Kong API Gateway's Admin API, handling authentication if enabled
- traefik_api.sh - queries the Traefik API, handling authentication if enabled
- ngrok_api.sh - queries the NGrok API with authentication

pingdom_*.sh - Pingdom API queries and reports for status, latency, average response times, latency averages by hour, SMS credits, outage periods and durations over the last year etc.:

- pingdom_api.sh - queries the Solarwinds Pingdom API with authentication
- pingdom_foreach_check.sh - executes a templated command against each Pingdom check, replacing the {check_id} and {check_name} in each iteration
- pingdom_checks.sh - shows all Pingdom checks, status and latencies
- pingdom_check_outages.sh / pingdom_checks_outages.sh - shows one or all Pingdom checks' outage histories for the last year
- pingdom_checks_average_response_times.sh - shows the average response times for all Pingdom checks for the last week
- pingdom_check_latency_by_hour.sh / pingdom_checks_latency_by_hour.sh - shows the average latency for one or all Pingdom checks broken down by hour of the day, over the last week
- pingdom_sms_credits.sh - gets the remaining number of Pingdom SMS credits
- terraform_cloud_api.sh - queries the Terraform Cloud API with authentication
- terraform_cloud_ip_ranges.sh - returns the list of IP ranges for Terraform Cloud via the API, or optionally one or more of the ranges used by different functions
- wordpress.sh - boots Wordpress in docker with a MySQL backend, and increases the upload_max_filesize to be able to restore a real-world-sized export backup
- wordpress_api.sh - queries the Wordpress API with authentication
- wordpress_posts_without_category_tags.sh - checks posts (articles) for categories without corresponding tags and prints the posts and their missing tags

java/ directory:
- java_show_classpath.sh - shows Java classpaths, one per line, of currently running Java programs
- jvm_heaps*.sh - show all your Java heap sizes for all running Java processes, and their total MB (for performance tuning and sizing)
- java_decompile_jar.sh - decompiles a Java JAR in /tmp, finds the main class and runs a Java decompiler on its main .class file using jd_gui.sh
- jd_gui.sh - runs the Java Decompiler JD GUI, downloading its jar the first time if it's not already present
- bytecode_viewer.sh - runs the Bytecode-Viewer GUI Java decompiler, downloading its jar the first time if it's not already present
- cfr.sh - runs the CFR command line Java decompiler, downloading its jar the first time if it's not already present
- procyon.sh - runs the Procyon command line Java decompiler, downloading its jar the first time if it's not already present

See also Knowledge Base notes for Java and JVM Performance Tuning.
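The heap totalling that jvm_heaps*.sh performs can be illustrated with a small sketch: pull `-Xmx` flags out of Java process command lines and sum them in MB. The process listing below is faked inline; the real script reads live `ps` output and handles more cases:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Faked process listing standing in for 'ps' output (illustration only)
ps_lines='java -Xmx512m -jar app1.jar
java -Xmx2g -jar app2.jar'

total_mb=0
while read -r xmx; do
  case "$xmx" in
    *g) total_mb=$(( total_mb + ${xmx%g} * 1024 )) ;;  # gigabytes -> MB
    *m) total_mb=$(( total_mb + ${xmx%m} )) ;;         # already MB
  esac
done < <(grep -oE -- '-Xmx[0-9]+[mg]' <<< "$ps_lines" | sed 's/-Xmx//')

echo "total heap: ${total_mb} MB"
```

Note this only counts explicit `-Xmx` settings; JVMs running on default heap sizes need querying via other means.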
python/ directory:
python/ directory:

- `python_compile.sh` - byte-compiles Python scripts and libraries into `.pyo` optimized files
- `python_pip_install.sh` - bulk installs PyPI modules from a mix of arguments / file lists / stdin, accounting for User vs System installs, root vs user sudo, VirtualEnvs / Anaconda / GitHub Workflows / Google Cloud Shell, Mac vs Linux library paths, and an ignore-failure option
- `python_pip_install_if_absent.sh` - installs PyPI modules not already in the Python library path (OS or pip installed) for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds
- `python_pip_install_for_script.sh` - installs PyPI modules for given script(s) if not already installed. Used for dynamic individual script dependency installation in the DevOps Python tools repo
- `python_pip_reinstall_all_modules.sh` - reinstalls all PyPI modules, which can fix some issues
- `pythonpath.sh` - prints all Python library search paths, one per line
- `python_find_library_path.sh` - finds the directory where a PyPI module is installed - without args finds the Python library base
- `python_find_library_executable.sh` - finds the directory where a PyPI module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your `$PATH`, where `which` won't help)
- `python_find_unused_pip_modules.sh` - finds PyPI modules that aren't used by any programs in the current directory tree
- `python_find_duplicate_pip_requirements.sh` - finds duplicate PyPI modules listed for install under the directory tree (useful for deduping module installs in a project and across submodules)
- `python_translate_import_module.sh` - converts Python import modules to PyPI module names, used by `python_pip_install_for_script.sh`
- `python_translate_module_to_import.sh` - converts PyPI module names to Python import names, used by `python_pip_install_if_absent.sh` and `python_find_unused_pip_modules.sh`
- `python_pyinstaller.sh` - creates PyInstaller self-contained Python programs with the Python interpreter and all PyPI modules included
- `python_pypi_versions.sh` - prints all available versions of a given PyPI module using the API

See also Knowledge Base notes for Python.
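Several of these scripts share the same "install only if absent" pattern. Below is my own minimal sketch of the idea, not the repo's actual logic: probe the import first and only fall back to pip when it fails, printed as a dry run so it stays side-effect free. Note that the PyPI package name often differs from the Python import name, which is what `python_translate_module_to_import.sh` deals with.

```shell
#!/bin/sh
# Hypothetical simplification of the "install if absent" idea: try the import
# first, and only fall back to pip if it fails. Dry run - prints the command.
pip_install_if_absent() {
    import_name="$1"    # Python import name, eg. yaml
    pypi_name="$2"      # PyPI package name, eg. PyYAML (often differs)
    if python3 -c "import ${import_name}" 2>/dev/null; then
        echo "skipping ${pypi_name}: already importable"
    else
        echo "would run: pip install ${pypi_name}"
    fi
}
```

For example, `pip_install_if_absent yaml PyYAML` skips the install entirely when the module already resolves, which is what keeps CI installs fast.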
perl/ directory:
- `perl_cpanm_install.sh` - bulk installs CPAN modules from a mix of arguments / file lists / stdin, accounting for User vs System installs, root vs user sudo, Perlbrew / Google Cloud Shell environments, Mac vs Linux library paths and an ignore-failure option; auto-finds and reads the build failure log for quicker debugging, showing the root cause error in CI build logs etc
- `perl_cpanm_install_if_absent.sh` - installs CPAN modules not already in the Perl library path (OS or CPAN installed) for faster installations where OS packages already provide some of the modules, reducing time and failure rates in CI builds
- `perl_cpanm_reinstall_all.sh` - re-installs all CPAN modules. Useful for recompiling XS modules on Macs after Migration Assistant from an Intel Mac to an ARM Silicon Mac leaves your home XS libraries broken because they're built for the wrong architecture
- `perlpath.sh` - prints all Perl library search paths, one per line
- `perl_find_library_path.sh` - finds the directory where a CPAN module is installed - without args finds the Perl library base
- `perl_find_library_executable.sh` - finds the directory where a CPAN module's CLI program is installed (system vs user, useful when it gets installed to a place that isn't in your `$PATH`, where `which` won't help)
- `perl_find_unused_cpan_modules.sh` - finds CPAN modules that aren't used by any programs in the current directory tree
- `perl_find_duplicate_cpan_requirements.sh` - finds CPAN modules listed for install more than once under the directory tree (useful for deduping module installs in a project and across submodules)
- `perl_generate_fatpacks.sh` - creates Fatpacks - self-contained Perl programs with all CPAN modules built in

See also Knowledge Base notes for Perl.
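The same "if absent" trick works for Perl: a module's loadability can be probed with `perl -M<module> -e 1` before reaching for cpanm. A hypothetical simplification (dry run, not the repo's actual code):

```shell
#!/bin/sh
# Probe the module with 'perl -M<module> -e 1' and only suggest cpanm when it
# isn't already loadable. Dry run - prints the command instead of running it.
cpanm_install_if_absent() {
    module="$1"
    if perl -M"$module" -e 1 2>/dev/null; then
        echo "skipping $module: already in Perl library path"
    else
        echo "would run: cpanm $module"
    fi
}
```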
packages/ directory:
- `golang_install.sh` - bulk installs Golang modules from a mix of arguments / file lists / stdin
- `golang_install_if_absent.sh` - same as above but only if the package binary isn't already available in `$PATH`
- `golang_rm_binaries.sh` - deletes binaries of the same name adjacent to `.go` files. Doesn't delete your `bin/` etc as these are often real deployed applications rather than development binaries

media/ directory:
- `image_join_vertical.sh` - joins two images top and bottom after matching their widths so they align correctly
- `image_join_horizontal.sh` - joins two images left and right after matching their heights so they align correctly
- `imageopen.sh` - opens the given image file using whatever available tool is found on Linux or Mac
- `svg_to_png.sh` - converts an SVG image to PNG to be usable on websites that don't support SVG images, like LinkedIn, Medium or Reddit
- `avif_to_png.sh` - converts an AVIF image to PNG to be usable on websites that don't support AVIF images, like LinkedIn
- `webp_to_png.sh` - converts a WebP image to PNG to be usable on websites that don't support WebP images, like Medium
- `mp3_set_artist.sh` / `mp3_set_album.sh` - set the artist / album tag for all mp3 files under given directories. Useful for grouping artists/albums and audiobook author/books (eg. for correct importing into Mac's Books.app)
- `mp3_set_track_name.sh` - sets the track name metadata for mp3 files under given directories to follow their filenames. Useful for correctly displaying audiobook progress / chapters etc.
- `mp3_set_track_order.sh` - sets the track order metadata for mp3 files under given directories to follow the lexical file naming order. Useful for correctly ordering album songs and audiobook chapters (eg. for Mac's Books.app). Especially useful for enforcing global ordering on multi-CD audiobooks after grouping them into a single audiobook using `mp3_set_album.sh` (otherwise the default track numbers in each CD interleave in Mac's Books.app)
- `avi_to_mp4.sh` - converts avi files to mp4 using ffmpeg. Useful to be able to play videos on devices like smart TVs that may not recognize newer codecs otherwise
- `mkv_to_mp4.sh` - converts mkv files to mp4 using ffmpeg. Same use case as above
- `youtube_download_channel.sh` - downloads all videos from a given YouTube channel URL

See also Knowledge Base notes for MultiMedia.
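The ffmpeg-based converters boil down to a loop that derives the output filename from the input and hands both to ffmpeg. A hypothetical simplification of the `avi_to_mp4.sh` idea: the command is printed rather than executed so the sketch runs without ffmpeg installed, and the exact codec flags here are an assumption, not necessarily the repo's:

```shell
#!/bin/sh
# Sketch: strip the .avi suffix, append .mp4, and print the ffmpeg command
# that would transcode it (-c:v / -c:a select video and audio codecs).
avi_to_mp4() {
    for f in "$@"; do
        echo ffmpeg -i "$f" -c:v libx264 -c:a aac "${f%.avi}.mp4"
    done
}
```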
40+ Spotify API scripts (used extensively to manage my Spotify-Playlists repo).
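Most of these scripts accept tracks as `spotify:` URIs, `open.spotify.com` URL links or bare IDs interchangeably. The normalization idea can be sketched like this (an illustrative simplification of my own, not the repo's actual code):

```shell
#!/bin/sh
# Reduce any of the three accepted input forms - spotify:track:<id>,
# open.spotify.com URL, or bare ID - down to the bare ID the API expects.
spotify_id() {
    printf '%s\n' "$1" |
    sed -e 's|^spotify:[a-z]*:||' \
        -e 's|^https://open\.spotify\.com/[a-z]*/||' \
        -e 's|^http://open\.spotify\.com/[a-z]*/||' \
        -e 's|?.*$||'    # strip trailing ?si=... share parameters
}
```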
spotify/ directory:
- `spotify_playlists*.sh` - list playlists in either `<id> <name>` or JSON format
- `spotify_playlist_tracks*.sh` - gets playlist contents as track URIs / `Artist - Track` / CSV format - useful for backups or exports between music systems
- `spotify_backup.sh` - backs up all Spotify playlists as well as the ordered list of playlists
- `spotify_backup_playlist*.sh` - backs up Spotify playlists to local files in both human readable `Artist - Track` format and Spotify URI format for easy restores or adding to new playlists
- `spotify_search*.sh` - searches Spotify's library for tracks / albums / artists, getting results in human readable format, JSON, or URI formats for easy loading to Spotify playlists
- `spotify_release_year.sh` - searches for a given track or album and finds the original release year
- `spotify_uri_to_name.sh` - converts Spotify track / album / artist URIs to human readable `Artist - Track` / CSV format. Takes Spotify URIs, URL links or just IDs. Reads URIs from files or standard input
- `spotify_create_playlist.sh` - creates a Spotify playlist, either public or private
- `spotify_rename_playlist.sh` - renames a Spotify playlist
- `spotify_set_playlists_public.sh` / `spotify_set_playlists_private.sh` - sets one or more given Spotify playlists to public / private
- `spotify_add_to_playlist.sh` - adds tracks to a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input. Can be combined with many other tools listed here which output Spotify URIs, or appended from other playlists. Can also be used to restore a Spotify playlist from backups
- `spotify_delete_from_playlist.sh` - deletes tracks from a given playlist. Takes a playlist name or ID and Spotify URIs in any form from files or standard input, optionally prefixed with a track position to remove only specific occurrences (useful for removing duplicates from playlists)
- `spotify_delete_from_playlist_if_in_other_playlists.sh` - deletes tracks from a given playlist if their URIs are found in the subsequently given playlists
- `spotify_delete_from_playlist_if_track_in_other_playlists.sh` - deletes tracks from a given playlist if their `Artist - Track` name matches are found in the subsequently given playlists (less accurate than the exact URI deletion above)
- `spotify_duplicate_uri_in_playlist.sh` - finds duplicate Spotify URIs in a given playlist (these are guaranteed exact duplicate matches), returns all but the first occurrence and optionally their track positions (zero-indexed to align with the Spotify API for easy chaining with other tools)
- `spotify_duplicate_tracks_in_playlist.sh` - finds duplicate Spotify tracks in a given playlist (these are identical `Artist - Track` name matches, which may be from different albums / singles)
- `spotify_delete_duplicates_in_playlist.sh` - deletes duplicate Spotify URI tracks (identical) in a given playlist using `spotify_duplicate_uri_in_playlist.sh` and `spotify_delete_from_playlist.sh`
- `spotify_delete_duplicate_tracks_in_playlist.sh` - deletes duplicate Spotify tracks (name matched) in a given playlist using `spotify_duplicate_tracks_in_playlist.sh` and `spotify_delete_from_playlist.sh`
- `spotify_delete_any_duplicates_in_playlist.sh` - calls both of the above scripts to first get rid of duplicate URIs and then remove any other duplicates by track name matches
- `spotify_playlist_tracks_uri_in_year.sh` - finds track URIs in a playlist where their original release date is in a given year or decade (by regex match)
- `spotify_playlist_uri_offset.sh` - finds the offset of a given track URI in a given playlist, useful for finding positions to resume processing a large playlist
- `spotify_top_artists*.sh` - lists your top artists in URI or human readable format
- `spotify_top_tracks*.sh` - lists top tracks in URI or human readable format
- `spotify_liked_tracks*.sh` - lists your Liked Songs in URI or human readable formats
- `spotify_liked_artists*.sh` - lists artists from Liked Songs in URI or human readable formats
- `spotify_artists_followed*.sh` - lists all followed artists in URI or human readable formats
- `spotify_artist_tracks.sh` - gets all track URIs for a given artist, from both albums and singles, for chain loading to playlists
- `spotify_follow_artists.sh` - follows artists for the given URIs from files or standard input
- `spotify_follow_top_artists.sh` - follows all artists in your current Spotify top artists list
- `spotify_follow_liked_artists.sh` - follows artists with N or more tracks in your Liked Songs
- `spotify_set_tracks_uri_to_liked.sh` - sets a list of Spotify track URIs to 'Liked' so they appear in the Liked Songs playlist. Useful for marking all the tracks in your best playlists as favourite tracks, or for porting historical Starred tracks to the newer Liked Songs
- `spotify_foreach_playlist.sh` - executes a templated command against all playlists, replacing `{playlist}` and `{playlist_id}` in each iteration
- `spotify_playlist_name_to_id.sh` / `spotify_playlist_id_to_name.sh` - convert playlist names <=> IDs
- `spotify_api_token.sh` - gets a Spotify authentication token using either the Client Credentials or Authorization Code authentication flow, the latter being able to read/modify private user data; automatically used by `spotify_api.sh`
- `spotify_api.sh` - queries any Spotify API endpoint with authentication, used by adjacent spotify scripts

bin/ , install/ , packages/ , setup/ directories:
- `install/` - installation scripts for various OS packages (RPM, Deb, Apk) for various Linux distros (Redhat RHEL / CentOS / Fedora, Debian / Ubuntu, Alpine)
- `packages/` - OS / Distro Package Management:
  - `install_packages.sh` - installs package lists from arguments, files or stdin on major Linux distros and Mac, detecting the package manager and invoking the right install commands, with sudo if not root. Works on RHEL / CentOS / Fedora, Debian / Ubuntu, Alpine, and Mac Homebrew. Leverages and supports all features of the distro / OS specific install scripts listed below
  - `install_packages_if_absent.sh` - installs package lists if they're not already installed, saving time and minimizing install logs / CI logs; same support list as above
  - `yum_install_packages.sh` / `yum_remove_packages.sh` - installs RPM lists from arguments, files or stdin. Handles Yum vs Dnf behavioural differences, calls sudo if not root, auto-attempts variations of python/python2/python3 package names. Avoids yum slowness by checking if an rpm is installed before attempting to install it; accepts a `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across RHEL/CentOS/Fedora versions)
  - `yum_install_packages_if_absent.sh` - installs RPMs only if not already installed and not a metapackage provided by other packages (eg. the `vim` metapackage provided by `vim-enhanced`), saving time and minimizing install logs / CI logs, plus all the features of `yum_install_packages.sh` above
  - `rpms_filter_installed.sh` / `rpms_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
  - `apt_install_packages.sh` / `apt_remove_packages.sh` - installs Deb package lists from arguments, files or stdin. Auto calls sudo if not root, accepts a `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Debian/Ubuntu distros/versions)
  - `apt_install_packages_if_absent.sh` - installs Deb packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `apt_install_packages.sh` above
  - `apt_wait.sh` - blocking wait on concurrent apt locks to avoid failures and continue when available, mimicking yum's waiting behaviour rather than erroring out
  - `debs_filter_installed.sh` / `debs_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
  - `apk_install_packages.sh` / `apk_remove_packages.sh` - installs Alpine apk package lists from arguments, files or stdin. Auto calls sudo if not root, accepts a `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across Alpine versions)
  - `apk_install_packages_if_absent.sh` - installs Alpine apk packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `apk_install_packages.sh` above
  - `apk_filter_installed.sh` / `apk_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
  - `brew_install_packages.sh` / `brew_remove_packages.sh` - installs Mac Homebrew package lists from arguments, files or stdin. Accepts a `NO_FAIL=1` env var to ignore unavailable / changed package names (useful for optional packages or attempts at different package names across versions)
  - `brew_install_packages_if_absent.sh` - installs Mac Homebrew packages only if not already installed, saving time and minimizing install logs / CI logs, plus all the features of `brew_install_packages.sh` above
  - `brew_filter_installed.sh` / `brew_filter_not_installed.sh` - pipe filters for packages that are / are not installed, for easy script piping
  - `brew_package_owns.sh` - finds which brew package owns a given filename argument

Run `make system-packages` before `make pip` / `make cpan` to shorten how many packages need installing, reducing the chances of build failures.

bin/ , checks/ , cicd/ or language specific directories:
lint.sh - lints one or more files, auto-determines the file types, parses lint headers and calls appropriate scripts and tools. Integrated with my custom .vimrc
run.sh - runs one or more files, auto-determines the file types, any run or arg headers and executes each file using the appropriate script or CLI tool. Integrated with my custom .vimrc
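Both `lint.sh` and `run.sh` rest on the same first step: working out a file's type. A minimal sketch of the idea (my own simplification, not the repo's actual logic): classify by extension first, then fall back to the shebang line.

```shell
#!/bin/sh
# Classify a file as python / shell / unknown by extension, falling back to
# reading the first (shebang) line when the extension doesn't decide it.
detect_filetype() {
    case "$1" in
        *.py)        echo python; return ;;
        *.sh|*.bash) echo shell;  return ;;
    esac
    case "$(head -n 1 -- "$1" 2>/dev/null)" in
        '#!'*python*) echo python ;;
        '#!'*sh*)     echo shell ;;
        *)            echo unknown ;;
    esac
}
```

The real scripts go further (lint/run headers, many more file types), but the dispatch shape is the same: detect the type, then pick the tool.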
check_*.sh - extensive collection of generalized tests - these run against all my GitHub repos via CI. Some examples:
Programming language linting:
Build System, Docker & CI linting:
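In CI these `check_*.sh` scripts are run in bulk against a repo. A hypothetical driver loop (the repo's real CI wiring is more involved) might look like:

```shell
#!/bin/sh
# Run every check script in a directory and stop at the first failure,
# roughly the way a CI stage would consume a collection of checks.
run_checks() {
    dir="$1"
    for check in "$dir"/check_*.sh; do
        [ -e "$check" ] || continue          # glob didn't match: no checks
        sh "$check" || { echo "FAILED: $check"; return 1; }
    done
    echo "all checks passed"
}
```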
Optional, only if you don't do the full make install .
Install only OS system package dependencies and AWS CLI via Python Pip (doesn't symlink anything to `$HOME`):

```shell
make
```

Adds sourcing to `.bashrc` / `.bash_profile` and symlinks dot config files to `$HOME` (doesn't install OS system package dependencies):

```shell
make link
```

undo via:

```shell
make unlink
```

Install only OS system package dependencies (doesn't include AWS CLI or Python packages):

```shell
make system-packages
```

Install AWS CLI:

```shell
make aws
```

Install Azure CLI:

```shell
make azure
```

Install GCP GCloud SDK (includes CLI):

```shell
make gcp
```

Install GCP GCloud Shell environment (sets up persistent OS packages and all home directory configs):

```shell
make gcp-shell
```

Install generically useful Python CLI tools and modules (includes AWS CLI, autopep8 etc):

```shell
make python
```

> make help
Usage:
Common Options:
make help show this message
make build installs all dependencies - OS packages and any language libraries via native tools eg. pip, cpanm, gem, go etc that are not available via OS packages
make build-retry retries 'make build' x 3 until success to try to mitigate temporary upstream repo failures triggering false alerts in CI systems
make ci prints env, then runs 'build-retry' for more resilient CI builds with debugging
make printenv prints environment variables, CPU cores, OS release, $PWD , Git branch, hashref etc. Useful for CI debugging
make system-packages installs OS packages only (detects OS via whichever package manager is available)
make test run tests
make clean removes compiled / generated files, downloaded tarballs, temporary files etc.
make submodules initialize and update submodules to the right release (done automatically by build / system-packages)
make init same as above, often useful to do in CI systems to get access to additional submodule provided targets such as 'make ci'
make cpan install any modules listed in any cpan-requirements.txt files if not already installed
make pip install any modules listed in any requirements.txt files if not already installed
make python-compile compile any python files found in the current directory and 1 level of subdirectory
make pycompile
make github open browser at github project
make readme open browser at github's README
make github-url print github url and copy to clipboard
make status open browser at Github CI Builds overview Status page for all projects
make ls print list of code files in project
make wc show counts of files and lines
Repo specific options:
make install builds all script dependencies, installs AWS CLI, symlinks all config files to $HOME and adds sourcing of bash profile
make link symlinks all config files to $HOME and adds sourcing of bash profile
make unlink removes all symlinks pointing to this repo's config files and removes the sourcing lines from .bashrc and .bash_profile
make python-desktop installs all Python Pip packages for desktop workstation listed in setup/pip-packages-desktop.txt
make perl-desktop installs all Perl CPAN packages for desktop workstation listed in setup/cpan-packages-desktop.txt
make ruby-desktop installs all Ruby Gem packages for desktop workstation listed in setup/gem-packages-desktop.txt
make golang-desktop installs all Golang packages for desktop workstation listed in setup/go-packages-desktop.txt
make nodejs-desktop installs all NodeJS packages for desktop workstation listed in setup/npm-packages-desktop.txt
make desktop installs all of the above + many desktop OS packages listed in setup/
make mac-desktop all of the above + installs a bunch of major common workstation software packages like Ansible, Terraform, MiniKube, MiniShift, SDKman, Travis CI, CCMenu, Parquet tools etc.
make linux-desktop
make ls-scripts print list of scripts in this project, ignoring code libraries in lib/ and .bash.d/
make github-cli installs GitHub CLI
make kubernetes installs Kubernetes kubectl and kustomize to ~/bin/
make terraform installs Terraform to ~/bin/
make vim installs Vundle and plugins
make tmux installs TMUX TPM and plugin for kubernetes context
make ccmenu installs and (re)configures CCMenu to watch this and all other major HariSekhon GitHub repos
make status open the Github Status page of all my repos build statuses across all CI platforms
make aws installs AWS CLI tools
make azure installs Azure CLI
make gcp installs Google Cloud SDK
make digital-ocean installs Digital Ocean CLI
make aws-shell sets up AWS Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .aws_customize_environment hook)
make gcp-shell sets up GCP Cloud Shell: installs core packages and links configs
(maintains itself across future Cloud Shells via .customize_environment hook)
make azure-shell sets up Azure Cloud Shell (limited compared to gcp-shell, doesn't install OS packages since there is no sudo)
Now exiting usage help with status code 3 to explicitly prevent silent build failures from stray 'help' arguments
make: *** [help] Error 3

(`make help` exits with error code 3, like most of my programs, to differentiate from build success and make sure a stray `help` argument doesn't cause a silent build failure with exit code 0)
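The pattern is easy to reuse in your own scripts: have the usage/help function return a distinctive non-zero code so an accidental `help` argument can never register as a passing build. A minimal sketch (3 chosen to match the convention above):

```shell
#!/bin/sh
# usage() deliberately returns 3 rather than 0, so that in a CI pipeline a
# stray 'help' argument fails the stage instead of silently succeeding.
usage() {
    echo "Usage: make [target]"
    return 3
}
```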
The rest of my original source repos are here.
Pre-built Docker images are available on my DockerHub.