diff --git a/.gitlab/issue_templates/Access Request.md b/.gitlab/issue_templates/Access Request.md index 1a7b224d6ccdad95fef69b5c8be1ce2b543f338e..2e3e8b7854eadd44949a59c36f367e9ebddeb6f8 100644 --- a/.gitlab/issue_templates/Access Request.md +++ b/.gitlab/issue_templates/Access Request.md @@ -13,4 +13,7 @@ The access level should be: - [ ] All accounts have been provided the necessary accesses + + + /label ~"Access" ~"To Do" \ No newline at end of file diff --git a/.gitlab/issue_templates/Application - Archive.md b/.gitlab/issue_templates/Application - Archive.md index 9f3b5fe4d8d43ae9f82411a391b200d4b43f2668..03042760908cfaf04f1da1ed0f44e42f6ba9aedd 100644 --- a/.gitlab/issue_templates/Application - Archive.md +++ b/.gitlab/issue_templates/Application - Archive.md @@ -17,5 +17,7 @@ Requesting this application be archived due to one of the following reasons: - [ ] Iron Bank frontend no longer lists application as available or approved -/label ~"Container::Archive" -/cc @ironbank-notifications/archive \ No newline at end of file + + + +/label ~"Container::Archive" \ No newline at end of file diff --git a/.gitlab/issue_templates/Application - Initial.md b/.gitlab/issue_templates/Application - Initial.md index 6594a0580b941815c0c7c6264cdfc42e28231f57..7ddab914be32ba1a5b110458609e7b178e7b75be 100644 --- a/.gitlab/issue_templates/Application - Initial.md +++ b/.gitlab/issue_templates/Application - Initial.md @@ -7,26 +7,69 @@ Requesting application to be hardened. This is only for initial hardening of a container. Current version: (State the current version of the application as you see it) -Under support: (Is the updated version within the same major version of the application or is this a new major version?) +## Communication + +All communication should occur through this issue. This ensures that all information is documented in a centralized location and also ensures that all of the assignees are notified of updates. 
It is imperative that all required parties are listed as assignees of this issue; please do not remove anyone from the assignee list. + +If you need to contact the Container Hardening team, please identify your assigned point of contact. You can find your point of contact as follows: +1. They should be listed as an assignee on this ticket +2. They should be listed in the `hardening_manifest.yaml` file under the `maintainers` section with a field of `cht_member: true` + +If you have no assignee, feel free to tag Container Hardening leadership by commenting on this issue with your questions/concerns and then adding `/cc @ironbank-notifications/leadership`. Gitlab will automatically notify all Container Hardening leadership to look at this issue and respond. + + +## Responsibilities + +If this application is owned by a Contributor or Vendor (identified as `Owner::Contributor` and `Owner::Vendor` respectively), then it is your responsibility to drive this issue through to completion. This means that the Container Hardening team is not here to help push any deadlines/timeframes you may have with other customers or DoD agencies. If you have issues with the activity, you may notify Container Hardening leadership above. Do not change the ownership labels. 
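For illustration, a hypothetical `maintainers` entry in `hardening_manifest.yaml` might look like the following. This is a sketch based on the ironbank-pipeline schema referenced in this template; all names, usernames, and emails below are placeholders:

```yaml
# Hypothetical excerpt from hardening_manifest.yaml -- placeholder values only
maintainers:
  - name: "Jane Doe"            # project maintainer (Contributor or Vendor)
    username: "jdoe"
    email: "jane.doe@example.com"
  - name: "Sam Smith"           # your Container Hardening point of contact
    username: "ssmith"
    email: "sam.smith@example.com"
    cht_member: true
```

The entry flagged `cht_member: true` identifies your assigned Container Hardening team contact.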
## Definition of Done + Hardening: -- [ ] Container builds successfully -- [ ] Greylist file has been created (requires a member from container hardening) +- [ ] Hardening manifest is created and adheres to the schema (https://repo1.dsop.io/ironbank-tools/ironbank-pipeline/-/blob/master/schema/hardening_manifest.schema.json) +- [ ] Container builds successfully through the Gitlab CI pipeline - [ ] Branch has been merged into `development` +- [ ] Project is configured for automatic renovate updates (if possible) Justifications: - [ ] All findings have been justified per the above documentation -- [ ] Justifications have been provided to the container hardening team +- [ ] Justifications have been attached to this issue +- [ ] Apply the label `Approval` to indicate this container is ready for the approval phase + +Note: The justifications must be provided in a timely fashion. Failure to do so could result in new findings being identified which may start this process over. -Approval Process (container hardening team processes): +Approval Process (Container Hardening Team processes): - [ ] Peer review from Container Hardening Team - [ ] Findings Approver has reviewed and approved all justifications - [ ] Approval request has been sent to Authorizing Official - [ ] Approval request has been processed by Authorizing Official +Note: If the above approval process is kicked back for any reason, the `Approval` label will be removed and the issue will be sent back to `Open`. Any comments will be listed in this issue for you to address. Once they have been addressed, you may re-add the `Approval` label. + +## Post Approval + +### Continuous Monitoring + +Once a container is approved, the `Approved` label will be applied to this issue and it will be closed. You will be able to find your applications on http://ironbank.dsop.io and https://registry1.dsop.io. + +In addition to the above, your application will now be subscribed to continuous monitoring. 
This means that any new findings discovered as part of this monitoring will need justifications. To satisfy this process, any new findings will trigger a new Gitlab issue in this project with the label `Container::New Findings`. All members listed in the `maintainers` section of the `hardening_manifest.yaml` file will automatically be assigned. It is your responsibility as a Contributor or Vendor to monitor for this and provide justifications in a timely fashion. This newly created issue will have all the instructions necessary to complete the process. Failure to provide justifications could result in the revocation of the application's approval status. + +### Updates + +It is imperative that application updates be submitted as quickly as possible. We do not want applications to become stale. To help with this process, Iron Bank recommends using a tool called [Renovate](https://github.com/renovatebot/renovate). This requires a `renovate.json` file to be placed in your project and can automate the creation of issues and merge requests. + +If not using Renovate, it will be up to you as a Contributor or Vendor to keep this image up-to-date at all times. When you wish to submit an application update, you must create a new issue in this project using the `Application - Update` template and associate it with the corresponding merge request. If you submit a merge request alone, work will not proceed until a related issue is created. These issues are tracked using the label `Container::Update`. + +Additionally, it is imperative that all updates be followed through to completion. Simply submitting an application update but not following through on justifications and approvals does not suffice and risks your application's approval status being revoked. + +### Bugs + +Occasionally, users may file bug reports for your application. It is your responsibility to monitor for these since they are created inside your project repository. 
Assignees will automatically be populated by the `maintainers` section of the `hardening_manifest.yaml` file and will have the label `Bug`. + + + + + -/label ~"Container::Initial" -/cc @ironbank-notifications/cht \ No newline at end of file +/label ~"Container::Initial" \ No newline at end of file diff --git a/.gitlab/issue_templates/Application - Update.md b/.gitlab/issue_templates/Application - Update.md index caebb3e9aab279c7f109ec0fbfa246b8add6d972..569e75d8a4ca8586187f741c405fbfbcf02bb309 100644 --- a/.gitlab/issue_templates/Application - Update.md +++ b/.gitlab/issue_templates/Application - Update.md @@ -13,15 +13,38 @@ Updated version: (State the version you would like the application updated to) Under support: (Is the updated version within the same major version of the application or is this a new major version?) +## Communication + +All communication should occur through this issue. This ensures that all information is documented in a centralized location and also ensures that all of the assignees are notified of updates. It is imperative that all required parties are listed as assignees of this issue; please do not remove anyone from the assignee list. + +If you need to contact the Container Hardening team, please identify your assigned point of contact. You can find your point of contact as follows: +1. They should be listed as an assignee on this ticket +2. They should be listed in the `hardening_manifest.yaml` file under the `maintainers` section with a field of `cht_member: true` + +If you have no assignee, feel free to tag Container Hardening leadership by commenting on this issue with your questions/concerns and then adding `/cc @ironbank-notifications/leadership`. Gitlab will automatically notify all Container Hardening leadership to look at this issue and respond. 
+ + +## Responsibilities + +If this application is owned by a Contributor or Vendor (identified as `Owner::Contributor` and `Owner::Vendor` respectively), then it is your responsibility to drive this issue through to completion. This means that the Container Hardening team is not here to help push any deadlines/timeframes you may have with other customers or DoD agencies. If you have issues with the activity, you may notify Container Hardening leadership above. Do not change the ownership labels. + + ## Definition of Done Hardening: -- [ ] Container builds successfully -- [ ] Container version has been updated in greylist file +- [ ] Hardening manifest has been updated and adheres to the schema (https://repo1.dsop.io/ironbank-tools/ironbank-pipeline/-/blob/master/schema/hardening_manifest.schema.json) +- [ ] Container builds successfully through the Gitlab CI pipeline - [ ] Branch has been merged into `development` +- [ ] Project is configured for automatic renovate updates (if possible) + +No new findings: +- [ ] There are no new findings in this update. Skip the Justifications and Approval Process steps and apply the label `Approval` Justifications: - [ ] All findings have been justified per the above documentation - [ ] Justifications have been provided to the container hardening team +- [ ] Apply the label `Approval` to indicate this container is ready for the approval phase + +Note: The justifications must be provided in a timely fashion. Failure to do so could result in new findings being identified which may start this process over. Approval Process: - [ ] Peer review from Container Hardening Team @@ -29,7 +52,31 @@ Approval Process: - [ ] Approval request has been sent to Authorizing Official - [ ] Approval request has been processed by Authorizing Official +Note: If the above approval process is kicked back for any reason, the `Approval` label will be removed and the issue will be sent back to `Open`. Any comments will be listed in this issue for you to address. 
Once they have been addressed, you may re-add the `Approval` label. + + +## Post Approval + +### Continuous Monitoring + +Once a container is approved, the `Approved` label will be applied to this issue and it will be closed. You will be able to find your applications on http://ironbank.dsop.io and https://registry1.dsop.io. + +In addition to the above, your application will now be subscribed to continuous monitoring. This means that any new findings discovered as part of this monitoring will need justifications. To satisfy this process, any new findings will trigger a new Gitlab issue in this project with the label `Container::New Findings`. All members listed in the `maintainers` section of the `hardening_manifest.yaml` file will automatically be assigned. It is your responsibility as a Contributor or Vendor to monitor for this and provide justifications in a timely fashion. This newly created issue will have all the instructions necessary to complete the process. Failure to provide justifications could result in the revocation of the application's approval status. + +### Updates + +It is imperative that application updates be submitted as quickly as possible. We do not want applications to become stale. To help with this process, Iron Bank recommends using a tool called [Renovate](https://github.com/renovatebot/renovate). This requires a `renovate.json` file to be placed in your project and can automate the creation of issues and merge requests. + +If not using Renovate, it will be up to you as a Contributor or Vendor to keep this image up-to-date at all times. When you wish to submit an application update, you must create a new issue in this project using the `Application - Update` template and associate it with the corresponding merge request. If you submit a merge request alone, work will not proceed until a related issue is created. These issues are tracked using the label `Container::Update`. 
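As a reference point for the Renovate option mentioned above, a minimal `renovate.json` placed at the root of the project might look like the following. This is a sketch only; `config:base` is a common Renovate preset, and your project's pipeline may require different settings:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"]
}
```

Consult the Renovate documentation for the full set of configuration options before committing a file like this.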
+ +Additionally, it is imperative that all updates be followed through to completion. Simply submitting an application update but not following through on justifications and approvals does not suffice and risks your application's approval status being revoked. + +### Bugs + +Occasionally, users may file bug reports for your application. It is your responsibility to monitor for these since they are created inside your project repository. Assignees will automatically be populated by the `maintainers` section of the `hardening_manifest.yaml` file and will have the label `Bug`. + + + -/label ~"Container::Update" -/cc @ironbank-notifications/updates \ No newline at end of file +/label ~"Container::Update" \ No newline at end of file diff --git a/.gitlab/issue_templates/Bug.md b/.gitlab/issue_templates/Bug.md index 1427a0caed1833bccd3b1e5f8c5f6eafde05266c..069eaf011d42a47b30c06a6e532be5f191b063f7 100644 --- a/.gitlab/issue_templates/Bug.md +++ b/.gitlab/issue_templates/Bug.md @@ -33,5 +33,9 @@ logs, and code as it's very hard to read otherwise.) 
- [ ] Bug has been identified and corrected within the container -/label ~Bug -/cc @ironbank-notifications/bug \ No newline at end of file + + + + + +/label ~Bug \ No newline at end of file diff --git a/.gitlab/issue_templates/Feature Request.md b/.gitlab/issue_templates/Feature Request.md index a0e2f195dc66e4187264381c5e96e8aa96db8a09..aad067130061ed0a1f0f82def17a6d74597d1184 100644 --- a/.gitlab/issue_templates/Feature Request.md +++ b/.gitlab/issue_templates/Feature Request.md @@ -28,5 +28,9 @@ - [ ] Feature has been implemented -/label ~Feature -/cc @ironbank-notifications/feature \ No newline at end of file + + + + + +/label ~Feature \ No newline at end of file diff --git a/.gitlab/issue_templates/Leadership Question.md b/.gitlab/issue_templates/Leadership Question.md index 4674f82f930085f34f51b4ecbb4d396519f53192..b2cf9e5ed349dfa03f6099b19cb92c8c9ca3ba36 100644 --- a/.gitlab/issue_templates/Leadership Question.md +++ b/.gitlab/issue_templates/Leadership Question.md @@ -3,5 +3,10 @@ (Detailed description of the question you'd like to ask the leadership team) + + + + + /label ~"Question::Leadership" ~"To Do" /cc @ironbank-notifications/leadership \ No newline at end of file diff --git a/.gitlab/issue_templates/New Findings.md b/.gitlab/issue_templates/New Findings.md index 068d029d89cb62dd4d4da5e03924c608172d97d6..867f8325e650bae70a6bbb793cd7ff304b015681 100644 --- a/.gitlab/issue_templates/New Findings.md +++ b/.gitlab/issue_templates/New Findings.md @@ -8,13 +8,20 @@ Container has new findings discovered during continuous monitoring. Justifications: - [ ] All findings have been justified - [ ] Justifications have been provided to the container hardening team +- [ ] `Approval` label has been applied + +Note: The justifications must be provided in a timely fashion. Failure to do so could result in new findings being identified which may start this process over. 
Approval Process: - [ ] Findings Approver has reviewed and approved all justifications - [ ] Approval request has been sent to Authorizing Official - [ ] Approval request has been processed by Authorizing Official +Note: If the above approval process is kicked back for any reason, the `Approval` label will be removed and the issue will be sent back to `Open`. Any comments will be listed in this issue for you to address. Once they have been addressed, you may re-add the `Approval` label. + + + + -/label ~"Container::New Findings" -/cc @ironbank-notifications/security \ No newline at end of file +/label ~"Container::New Findings" \ No newline at end of file diff --git a/.gitlab/issue_templates/Onboarding Question.md b/.gitlab/issue_templates/Onboarding Question.md index 77dea11e56c87d3fb65a1cf2ce7901621058f970..ae8011ecfe1e0b95ed4c5658c122d47e21b89b1a 100644 --- a/.gitlab/issue_templates/Onboarding Question.md +++ b/.gitlab/issue_templates/Onboarding Question.md @@ -3,5 +3,10 @@ (Detailed description of the question you'd like to ask the onboarding team) + + + + + /label ~"Question::Onboarding" ~"To Do" /cc @ironbank-notifications/onboarding \ No newline at end of file diff --git a/.gitlab/issue_templates/Pipeline Failure.md b/.gitlab/issue_templates/Pipeline Failure.md index 28b82a9454358a542efaa4b9c1c99542e3487fd6..36aa982dd3d0710f880e30138e42bbec870a7e94 100644 --- a/.gitlab/issue_templates/Pipeline Failure.md +++ b/.gitlab/issue_templates/Pipeline Failure.md @@ -27,5 +27,10 @@ - [ ] Pipeline failure has been resolved -/label ~Pipeline -/cc @ironbank-notifications/pipelines \ No newline at end of file + + + + + + +/label ~Pipeline \ No newline at end of file diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 0000000000000000000000000000000000000000..be68af3d59ef87cd2fe367c02bce130f7408c2c3 --- /dev/null +++ b/Dockerfile @@ -0,0 +1,33 @@ +ARG BASE_REGISTRY=registry1.dso.mil +ARG BASE_IMAGE=ironbank/redhat/openjdk/openjdk11 +ARG BASE_TAG=1.11 + +FROM 
tchiotludo/akhq:0.18.0 as base + +FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG} + +WORKDIR /app + +USER 0 + +COPY --from=base /app . +COPY --from=base /usr/local/bin/ /usr/local/bin/ +COPY ./config . +COPY ./scripts/akhq . + +ENV MICRONAUT_CONFIG_FILES=/app/application.yml + +RUN chown -R 1001 /app + +RUN dnf upgrade -y && \ + dnf clean all && \ + rm -rf /var/cache/dnf && \ + chmod +x /app/akhq + +ENTRYPOINT ["docker-entrypoint.sh"] + +CMD ["./akhq"] + +USER 1001 + +HEALTHCHECK NONE diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..9c8f3ea0871e0bfe81da0fa6e7c1d7d156dc380e --- /dev/null +++ b/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. \ No newline at end of file diff --git a/README.md b/README.md index f2a2b8884b310e2e5d2dfcdbbe68aab6314f088a..94540e022843b457ed8183d1edc89f358de798d8 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,662 @@ -# master-project-template +# AKHQ (previously known as KafkaHQ) +## Contents -Project template for all Iron Bank container repositories. \ No newline at end of file +- [Features](#features) +- [Quick Preview](#quick-preview) +- [Installation](#installation) + - [Docker](#docker) + - [Stand Alone](#stand-alone) + - [Kubernetes using Helm](#running-in-kubernetes-using-a-helm-chart) +- [Configuration](#configuration) + - [JVM.options file](#run-with-another-jvmoptions-file) + - [Kafka cluster](#kafka-cluster-configuration) + - [AKHQ](#akhq-configuration) + - [Security](#security) + - [Server](#server) + - [Micronaut](#micronaut-configuration) +- [Api](#api) +- [Monitoring Endpoint](#monitoring-endpoint) +- [Development Environment](#development-environment) +- [Schema references](#schema-references) +- [Who's using AKHQ](#whos-using-akhq) + + +## Features + +- **General** + - Works with modern Kafka cluster (1.0+) + - Connection on standard or ssl, sasl cluster + - Multi cluster +- **Topics** + - List + - Configurations view + - Partitions view + - ACLS view + - Consumers groups assignments view + - Node leader & assignments view + - Create a topic + - Configure a topic + - Delete a 
topic +- **Browse Topic data** + - View data, offset, key, timestamp & headers + - Automatic deserialization of Avro messages encoded with the schema registry + - Configurations view + - Logs view + - Delete a record + - Empty a topic (delete all the records from one topic) + - Sort view + - Filter per partition + - Filter with a starting time + - Filter data with a search string +- **Consumer Groups** (only with Kafka internal storage, not with old Zookeeper) + - List with lag, topics assignments + - Partitions view & lag + - ACLS view + - Node leader & assignments view + - Display active and pending consumer groups + - Delete a consumer group + - Update consumer group offsets to start / end / timestamp +- **Schema Registry** + - List schemas + - Create / Update / Delete a schema + - View and delete individual schema versions +- **Connect** + - List connect definitions + - Create / Update / Delete a definition + - Pause / Resume / Restart a definition or a task +- **Nodes** + - List + - Configurations view + - Logs view + - Configure a node +- **ACLS** + - List principals + - List principal topic & group ACLs +- **Authentication and Roles** + - Read only mode + - BasicHttp with roles per user + - User groups configuration + - Filter topics with regexp for current groups + - LDAP configuration to match AKHQ groups/roles + +## New React UI + +Since this is a major rework, the new UI can have some issues, so please [report any issue](https://github.com/tchiotludo/akhq/issues), thanks! + +## Quick preview +* Download the [docker-compose.yml](https://raw.githubusercontent.com/tchiotludo/akhq/master/docker-compose.yml) file +* run `docker-compose pull` to be sure to have the latest version of AKHQ +* run `docker-compose up` +* go to [http://localhost:8080](http://localhost:8080) + +It will start a Kafka node, a Zookeeper node, a Schema Registry, a Kafka Connect instance, load some sample data, start a consumer group and a Kafka stream, and start AKHQ. 
+ +## Installation + +First you need a [configuration file](#configuration) in order to configure AKHQ's connections to the Kafka brokers. + +### Docker + +```sh +docker run -d \ + -p 8080:8080 \ + -v /tmp/application.yml:/app/application.yml \ + tchiotludo/akhq +``` +* The path passed to `-v` (here `/tmp/application.yml`) must be an absolute path to your configuration file +* Go to [http://localhost:8080](http://localhost:8080) + + +### Stand Alone +* Install Java 11 +* Download the latest jar from the [release page](https://github.com/tchiotludo/akhq/releases) +* Create a [configuration file](#configuration) +* Launch the application with `java -Dmicronaut.config.files=/path/to/application.yml -jar akhq.jar` +* Go to [http://localhost:8080](http://localhost:8080) + + +### Running in Kubernetes (using a Helm Chart) + +### Using Helm repository + +* Add the AKHQ helm charts repository: +```sh +helm repo add akhq https://akhq.io/ +``` +* Install or upgrade +```sh +helm upgrade --install akhq akhq/akhq +``` +#### Requirements + +* Chart version >=0.1.1 requires Kubernetes version >=1.14 +* Chart version 0.1.0 works on previous Kubernetes versions +```sh +helm install akhq akhq/akhq --version 0.1.0 +``` + +### Using git +* Clone the repository: +```sh +git clone https://github.com/tchiotludo/akhq && cd akhq/deploy/helm/akhq +``` +* Update the helm values located in [deploy/helm/values.yaml](helm/akhq/values.yaml) + * `configuration` values contain all the related configuration that you can find in [application.example.yml](application.example.yml) and will be stored in a `ConfigMap` + * `secrets` values contain all the sensitive configuration (credentials) that you can find in [application.example.yml](application.example.yml) and will be stored in a `Secret` + * Both values are merged at startup +* Apply the chart: +```sh +helm install --name=akhq-release-name . +``` + + +## Configuration +The configuration file can be provided in Java properties, YAML, JSON or Groovy format. 
A YAML configuration example can be found here: [application.example.yml](application.example.yml) + +### Pass custom Java opts + +By default, the docker container accepts custom JVM options via the `JAVA_OPTS` environment variable. +For example, if you want to change the default timezone, just add `-e "JAVA_OPTS=-Duser.timezone=Europe/Paris"` + +### Run with another jvm.options file + +By default, the docker container runs with a [jvm.options](docker/app/jvm.options) file; you can override it with +your own by setting the `JVM_OPTS_FILE` environment variable to the path of your file. + +Override the `JVM_OPTS_FILE` with docker run: + +```sh +docker run -d \ + --env JVM_OPTS_FILE={{path-of-your-jvm.options-file}} \ + -p 8080:8080 \ + -v /tmp/application.yml:/app/application.yml \ + tchiotludo/akhq +``` + +Override the `JVM_OPTS_FILE` with docker-compose: + +```yaml +version: '3.7' +services: + akhq: + image: tchiotludo/akhq-jvm:dev + environment: + JVM_OPTS_FILE: /app/jvm.options + ports: + - "8080:8080" + volumes: + - /tmp/application.yml:/app/application.yml +``` + +If you do not override `JVM_OPTS_FILE`, the docker container uses the default one instead. + +### Kafka cluster configuration +* `akhq.connections` is a key/value configuration where: + * `key`: a URL-friendly string (letters, numbers, `_`, `-`; dots are not allowed) that identifies your cluster (`my-cluster-1` and `my-cluster-2` in the example above) + * `properties`: any of the configurations found in the [Kafka consumer documentation](https://kafka.apache.org/documentation/#consumerconfigs). The most important is `bootstrap.servers`, a list of host:port pairs for your Kafka brokers. 
+ * `schema-registry`: *(optional)* + * `url`: the schema registry url + * `basic-auth-username`: schema registry basic auth username + * `basic-auth-password`: schema registry basic auth password + * `properties`: all the configurations for the registry client, especially the ssl configuration + * `connect`: *(optional list; define each connector as an element of the list)* + * `name`: connect name + * `url`: connect url + * `basic-auth-username`: connect basic auth username + * `basic-auth-password`: connect basic auth password + * `ssl-trust-store`: /app/truststore.jks + * `ssl-trust-store-password`: trust-store-password + * `ssl-key-store`: /app/truststore.jks + * `ssl-key-store-password`: key-store-password + +#### SSL Kafka Cluster with basic auth +Configuration example for a Kafka cluster secured by SSL, as offered by SaaS providers like Aiven (full HTTPS & basic auth): + +You need to generate JKS & P12 files from the PEM and cert files given by the SaaS provider. +```bash +openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key +keytool -import -file ca.pem -alias CA -keystore client.truststore.jks +``` + +The configuration will look like this example: + +```yaml +akhq: + connections: + ssl-dev: + properties: + bootstrap.servers: "{{host}}.aivencloud.com:12835" + security.protocol: SSL + ssl.truststore.location: {{path}}/avnadmin.truststore.jks + ssl.truststore.password: {{password}} + ssl.keystore.type: "PKCS12" + ssl.keystore.location: {{path}}/avnadmin.keystore.p12 + ssl.keystore.password: {{password}} + ssl.key.password: {{password}} + schema-registry: + url: "https://{{host}}.aivencloud.com:12838" + basic-auth-username: avnadmin + basic-auth-password: {{password}} + properties: {} + connect: + - name: connect-1 + url: "https://{{host}}.aivencloud.com:{{port}}" + basic-auth-username: avnadmin + basic-auth-password: {{password}} +``` + +### AKHQ configuration + +#### Pagination +* `akhq.pagination.page-size`: number of topics per page (default: 
25) + +#### Topic List +* `akhq.topic.default-view` is the default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM) +* `akhq.topic.internal-regexps` is a list of regexps for topics to be considered internal (internal topics can't be deleted or updated) +* `akhq.topic.stream-regexps` is a list of regexps for topics to be considered internal stream topics + +#### Topic creation default values + +These parameters are the default values used in the topic creation page. + +* `akhq.topic.retention` Default retention in ms +* `akhq.topic.replication` Default number of replicas to use +* `akhq.topic.partition` Default number of partitions + +#### Topic Data +* `akhq.topic-data.sort`: default sort order (OLDEST, NEWEST) (default: OLDEST) +* `akhq.topic-data.size`: max records per page (default: 50) +* `akhq.topic-data.poll-timeout`: the time, in milliseconds, spent waiting in poll if data is not available in the + buffer (default: 1000). + + +### Security +* `akhq.security.default-group`: default group for all users, even unlogged ones. +By default, the default group is `admin`, which gives you full read / write access to the whole app. + +Security & roles are enabled by default, but anonymous users have full access. You can completely disable +security with `micronaut.security.enabled: false`. + +If you need a read-only application, simply add this to your configuration file: +```yaml +akhq: + security: + default-group: reader +``` + + + +#### Auth + +##### JWT + +AKHQ uses JWT tokens to perform authentication. 
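The signing secret must hold at least 256 bits of randomness. One way to generate such a secret (a sketch, assuming `openssl` is installed; any source of 32+ random bytes works):

```shell
# 32 random bytes, base64-encoded: a 256-bit secret suitable for JWT signing
openssl rand -base64 32
```

Paste the resulting string into both `secret` fields of the config below.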
+Please generate a secret that is at least 256 bits and change the config like this: + +```yaml +micronaut: + security: + enabled: true + token: + jwt: + signatures: + secret: + generator: + secret: +``` + +##### Groups + +Groups allow you to limit what users can access. + +Define groups with specific roles for your users: +* `akhq.security.default-group`: default group for all users, even unlogged ones + +* `akhq.security.groups`: groups list definition + * `- name: group-name` Group identifier + * `roles`: Roles list for the group + * `attributes.topics-filter-regexp`: Regexp to filter the topics available for the current group + * `attributes.connects-filter-regexp`: Regexp to filter the Connect tasks available for the current group + + +3 default groups are available: +- `admin` with all rights +- `reader` with read-only access to all of AKHQ +- `no-roles` without any roles, forcing the user to log in + +##### Basic Auth +* `akhq.security.basic-auth`: list of users & passwords with their assigned groups + * `- username: actual-username`: login of the current user as a yaml key (may be anything: email, login, ...) + * `password`: password in sha256 (default) or bcrypt. The password can be generated: + * for the default SHA256, with the command `echo -n "password" | sha256sum` or the Ansible filter `{{ 'password' | hash('sha256') }}` + * for BCrypt, with the Ansible filter `{{ 'password' | password_hash('blowfish') }}` + * `passwordHash`: password hashing algorithm, either `SHA256` or `BCRYPT` + * `groups`: groups for the current user + +> Take care that basic auth stores the session in server **memory**. 
If your instance is behind a reverse proxy or a +> load balancer, you will need to forward the session cookie named `SESSION` and / or use +> [session stickiness](https://en.wikipedia.org/wiki/Load_balancing_(computing)#Persistence) + +Configure the basic-auth connection in AKHQ: +```yaml +akhq.security: + basic-auth: + - username: admin + password: "$2a$" + passwordHash: BCRYPT + groups: + - admin + - username: reader + password: "" + groups: + - reader +``` + +##### LDAP +Configure how the LDAP groups will be matched to AKHQ groups: +* `akhq.security.ldap.groups`: LDAP groups list + * `- name: ldap-group-name`: LDAP group name (same name as in LDAP) + * `groups`: AKHQ group list to be used for the current LDAP group + +Example using the [online ldap test server](https://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/) + +Configure the LDAP connection in Micronaut: +```yaml +micronaut: + security: + ldap: + default: + enabled: true + context: + server: 'ldap://ldap.forumsys.com:389' + managerDn: 'cn=read-only-admin,dc=example,dc=com' + managerPassword: 'password' + search: + base: "dc=example,dc=com" + groups: + enabled: true + base: "dc=example,dc=com" +``` + +If you want to enable anonymous auth to your LDAP server, you can pass: +```yaml +managerDn: '' +managerPassword: '' +``` + +Debugging the LDAP connection can be done with: +```bash +curl -i -X POST -H "Content-Type: application/json" \ + -d '{ "configuredLevel": "TRACE" }' \ + http://localhost:8080/loggers/io.micronaut.configuration.security +``` + + +Configure AKHQ groups, LDAP groups and users: +```yaml +akhq: + security: + groups: + - name: topic-reader # Group name + roles: # roles for the group + - topic/read + attributes: + # Regexp to filter topic available for group + topics-filter-regexp: "test\\.reader.*" + connects-filter-regexp: "^test.*$" + - name: topic-writer # Group name + roles: + - topic/read + - topic/insert + - topic/delete + - topic/config/update + attributes: + 
topics-filter-regexp: "test.*" + connects-filter-regexp: "^test.*$" + ldap: + groups: + - name: mathematicians + groups: + - topic-reader + - name: scientists + groups: + - topic-reader + - topic-writer + users: + - username: franz + groups: + - topic-reader + - topic-writer + +``` + +### OIDC +To enable OIDC in the application, you'll first have to enable OIDC in Micronaut: + +```yaml +micronaut: + security: + oauth2: + enabled: true + clients: + google: + client-id: "" + client-secret: "" + openid: + issuer: "" +``` + +To further tell AKHQ to display OIDC options on the login page and customize the claim mapping, configure OIDC in the AKHQ config: + +```yaml +akhq: + security: + oidc: + enabled: true + providers: + google: + label: "Login with Google" + username-field: preferred_username + groups-field: roles + default-group: topic-reader + groups: + - name: mathematicians + groups: + - topic-reader + - name: scientists + groups: + - topic-reader + - topic-writer + users: + - username: franz + groups: + - topic-reader + - topic-writer +``` + +The username field can be any string field; the roles field has to be a JSON array. + +### Server +* `micronaut.server.context-path`: if behind a reverse proxy, the path to akhq with a trailing slash (optional). Example: + if akhq is behind a reverse proxy with the url http://my-server/akhq, set base-path: "/akhq/". Not needed if you're + behind a reverse proxy with a subdomain such as http://akhq.my-server/ + +### Kafka admin / producer / consumer default properties +* `akhq.clients-defaults.{{admin|producer|consumer}}.properties`: default configuration for the admin, producer or + consumer. All properties from the [Kafka documentation](https://kafka.apache.org/documentation/) are available. + +### Micronaut configuration +> Since AKHQ is based on [Micronaut](https://micronaut.io/), you can customize configurations (server port, ssl, ...) with [Micronaut configuration](https://docs.micronaut.io/snapshot/guide/configurationreference.html#io.micronaut.http.server.HttpServerConfiguration). 
+> More information can be found in the [Micronaut documentation](https://docs.micronaut.io/snapshot/guide/index.html#config) + +### Docker +The AKHQ docker image supports 3 environment variables to handle configuration: +* `AKHQ_CONFIGURATION`: a string that contains the full configuration in yml that will be written to + /app/configuration.yml in the container. +* `MICRONAUT_APPLICATION_JSON`: a string that contains the full configuration in JSON format +* `MICRONAUT_CONFIG_FILES`: a path to a configuration file in the container. The default path is `/app/application.yml` + +#### How to mount configuration file + +When mounting configuration files, take care not to remove the akhq files located in /app. +You need to explicitly mount `/app/application.yml` and not mount the `/app` directory. +Mounting the whole directory will remove the AKHQ binaries and give you this error: ` +/usr/local/bin/docker-entrypoint.sh: 9: exec: ./akhq: not found` + +```yaml +volumeMounts: +- mountPath: /app/application.yml + subPath: application.yml + name: config + readOnly: true + +``` + +## Api +An **experimental** api is available that allows you to fetch everything exposed by AKHQ through an api. + +Take care that this api is **experimental** and **will** change in a future release. +Some endpoints expose too much data and are slow to fetch, and we will remove +some properties in the future in order to be fast. + +Example: the list topics endpoint exposes log dirs, consumer groups and offsets. Fetching all of these +is slow for now and we will remove them in the future. + +You can discover the api endpoints here: +* `/api`: a [RapiDoc](https://mrin9.github.io/RapiDoc/) webpage that documents all the endpoints. +* `/swagger/akhq.yml`: a full [OpenApi](https://www.openapis.org/) specification file + +## Monitoring endpoint +Several monitoring endpoints are enabled by default. You can disable them or restrict access to authenticated users only using the Micronaut configuration below. 
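For example, the management endpoints can be locked down by marking them as sensitive (a sketch of Micronaut's `endpoints` configuration; check the Micronaut management documentation for the exact options your version supports):

```yaml
endpoints:
  all:
    sensitive: true    # every management endpoint now requires an authenticated user
  health:
    sensitive: false   # keep the health probe reachable by load balancers
```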
+ +* `/info` [Info Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#infoEndpoint) with git status + information. +* `/health` [Health Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#healthEndpoint) +* `/loggers` [Loggers Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#loggersEndpoint) +* `/metrics` [Metrics Endpoint](https://docs.micronaut.io/snapshot/guide/index.html#metricsEndpoint) +* `/prometheus` [Prometheus Endpoint](https://micronaut-projects.github.io/micronaut-micrometer/latest/guide/) + +## Debugging AKHQ performance issues + +You can debug all query durations from AKHQ with this command: +```bash +curl -i -X POST -H "Content-Type: application/json" \ + -d '{ "configuredLevel": "TRACE" }' \ + http://localhost:8080/loggers/org.akhq +``` + +## Development Environment + +### Early dev image + +You can get access to the latest features / bug fixes with the docker dev image, automatically built on the `dev` tag: +```bash +docker pull tchiotludo/akhq:dev +``` + +The dev jar is not published on GitHub; you have 2 ways to get the `dev` jar: + +Get it from the docker image: +```bash +docker pull tchiotludo/akhq:dev +docker run --rm --name=akhq -it tchiotludo/akhq:dev +docker cp akhq:/app/akhq.jar . +``` +Or build it with `./gradlew shadowJar`; the jar will be located at `build/libs/akhq-*.jar` + + +### Development Server + +A docker-compose file is provided to start a development environment. +Just install docker & docker-compose, clone the repository and issue a simple `docker-compose -f docker-compose-dev.yml up` to start a dev server. +The dev server is a java server & webpack-dev-server with live reload. + +The configuration for the dev server is in `application.dev.yml`. + +## Schema references + +Since Confluent 5.5.0, Avro schemas can be reused by other schemas through schema references. This feature allows you to define a schema once and use it as a record type inside one or more schemas. 
+ +When registering new Avro schemas with the AKHQ UI, it is now possible to pass a slightly more complex object with a `schema` and a `references` field. + +To register a new schema without references, no need to change anything: + +```json +{ + "name": "Schema1", + "namespace": "org.akhq", + "type": "record", + "fields": [ + { + "name": "description", + "type": "string" + } + ] +} +``` + +To register a new schema with a reference to an already registered schema: + +```json +{ + "schema": { + "name": "Schema2", + "namespace": "org.akhq", + "type": "record", + "fields": [ + { + "name": "name", + "type": "string" + }, + { + "name": "schema1", + "type": "Schema1" + } + ] + }, + "references": [ + { + "name": "Schema1", + "subject": "SCHEMA_1", + "version": 1 + } + ] +} +``` + +Documentation on Confluent 5.5 and schema references can be found [here](https://docs.confluent.io/5.5.0/schema-registry/serdes-develop/index.html). + + +## Who's using AKHQ +* [Adeo](https://www.adeo.com/) +* [Auchan Retail](https://www.auchan-retail.com/) +* [Bell](https://www.bell.ca) +* [BMW Group](https://www.bmwgroup.com) +* [Boulanger](https://www.boulanger.com/) +* [GetYourGuide](https://www.getyourguide.com) +* [Klarna](https://www.klarna.com) +* [La Redoute](https://laredoute.io/) +* [Leroy Merlin](https://www.leroymerlin.fr/) +* [NEXT Technologies](https://www.nextapp.co/) +* [Nuxeo](https://www.nuxeo.com/) +* [Pipedrive](https://www.pipedrive.com) +* [BARMER](https://www.barmer.de/) +* [TVG](https://www.tvg.com) + + +## Credits + +Many thanks to: + +* [JetBrains](https://www.jetbrains.com/?from=AKHQ) for their free OpenSource license. +* Apache, Apache Kafka, Kafka, and associated open source project names are trademarks of the Apache Software Foundation. AKHQ is not affiliated with, endorsed by, or otherwise associated with the Apache Software Foundation. 
+ +[![Jetbrains](https://user-images.githubusercontent.com/2064609/55432917-6df7fc00-5594-11e9-90c4-5133fbb6d4da.png)](https://www.jetbrains.com/?from=AKHQ) + + +## License +Apache 2.0 © [tchiotludo](https://github.com/tchiotludo) diff --git a/config/application.yml b/config/application.yml new file mode 100644 index 0000000000000000000000000000000000000000..943710e3991bd0ae35b16126ac27cba7d590e139 --- /dev/null +++ b/config/application.yml @@ -0,0 +1,253 @@ +micronaut: + security: + enabled: true + # LDAP authentication configuration + ldap: + default: + enabled: true + context: + server: 'ldap://ldap.forumsys.com:389' + managerDn: 'cn=read-only-admin,dc=example,dc=com' + managerPassword: + search: + base: "dc=example,dc=com" + groups: + enabled: true + base: "dc=example,dc=com" + # OIDC authentication configuration + oauth2: + enabled: true + clients: + oidc: + client-id: "" + client-secret: "" + openid: + issuer: "" + token: + jwt: + signatures: + secret: + generator: + secret: + + server: + context-path: "" # if behind a reverse proxy, path to akhq without trailing slash (optional). Example: akhq is + # behind a reverse proxy with url http://my-server/akhq, set base-path: "/akhq". + # Not needed if you're behind a reverse proxy with subdomain http://akhq.my-server/ +akhq: + server: + access-log: # Access log configuration (optional) + enabled: true # true by default + name: org.akhq.log.access # Logger name + format: "[Date: {}] [Duration: {} ms] [Url: {} {}] [Status: {}] [Ip: {}] [User: {}]" # Logger format + + # default kafka properties for each client, available for admin / producer / consumer (optional) + clients-defaults: + consumer: + properties: + isolation.level: read_committed + + # list of kafka clusters available for akhq + connections: + my-cluster-plain-text: # url friendly name for the cluster (letter, number, _, -, ... 
dot are not allowed here) + properties: # standard kafka properties (optional) + bootstrap.servers: "kafka:9092" + schema-registry: + url: "http://schema-registry:8085" # schema registry url (optional) + type: "confluent" # schema registry type (optional). Supported types are "confluent" (default) or "tibco" + # Basic Auth user / pass + basic-auth-username: ${UN} + basic-auth-password: ${PW} + properties: # standard kafka properties (optional) + ssl.protocol: TLS + connect: + - name: connect-1 + url: "http://connect:8083" + # Basic Auth user / pass (optional) + basic-auth-username: ${UN} + basic-auth-password: ${PW} + # ssl store configuration (optional) + ssl-trust-store: /app/truststore.jks + ssl-trust-store-password: ${PW} + ssl-key-store: /app/truststore.jks + ssl-key-store-password: ${PW} + - name: connect-2 + url: "http://connect:8084" + # Basic Auth user / pass (optional) + basic-auth-username: ${UN} + basic-auth-password: ${PW} + # ssl store configuration (optional) + ssl-trust-store: /app/truststore.jks + ssl-trust-store-password: ${PW} + ssl-key-store: /app/truststore.jks + ssl-key-store-password: ${PW} + deserialization: + protobuf: + # (optional) if descriptor-file properties are used + descriptors-folder: "/app/protobuf_desc" + topics-mapping: + - topic-regex: "album.*" + descriptor-file-base64: "" #Base64 + value-message-type: "Album" + - topic-regex: "film.*" + descriptor-file-base64: "" #Base64 + value-message-type: "Film" + - topic-regex: "test.*" + descriptor-file: "other.desc" + key-message-type: "Row" + value-message-type: "Envelope" + # Ui Cluster Options (optional) + ui-options: + topic: + default-view: ALL # default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM). Overrides default + skip-consumer-groups: false # Skip loading consumer group information when showing topics. Overrides default + skip-last-record: true # Skip loading last record date information when showing topics. 
Overrides default + topic-data: + sort: NEWEST # default sort order (OLDEST, NEWEST) (default: OLDEST). Overrides default + + my-cluster-ssl: + properties: + bootstrap.servers: "kafka:9093" + security.protocol: SSL + ssl.truststore.location: /app/truststore.jks + ssl.truststore.password: ${PW} + ssl.keystore.location: /app/keystore.jks + ssl.keystore.password: ${PW} + ssl.key.password: ${PW} + + my-cluster-sasl: + properties: + bootstrap.servers: "kafka:9094" + security.protocol: SASL_SSL + sasl.mechanism: SCRAM-SHA-256 + sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${UN}" password="${PW}"; + ssl.truststore.location: /app/truststore.jks + ssl.truststore.password: ${PW} + ssl.keystore.location: /app/keystore.jks + ssl.keystore.password: ${PW} + ssl.key.password: ${PW} + + pagination: + page-size: 25 # number of elements per page (default : 25) + threads: 16 # Number of parallel threads to resolve page + + # Topic list display options (optional) + topic: + retention: 172800000 # default retention period when creating topic + partition: 3 # default number of partitions when creating topic + replication: 3 # default number of replicas when creating topic + internal-regexps: # list of regexps to be considered as internal (internal topics can't be deleted or updated) + - "^_.*$" + - "^.*_schemas$" + - "^.*connect-config$" + - "^.*connect-offsets$" + - "^.*connect-status$" + stream-regexps: # list of regexps to be considered as internal stream topics + - "^.*-changelog$" + - "^.*-repartition$" + - "^.*-rekey$" + skip-consumer-groups: false # Skip loading consumer group information when showing topics + skip-last-record: false # Skip loading last record date information when showing topics + + # Topic display data options (optional) + topic-data: + size: 50 # max records per page (default: 50) + poll-timeout: 1000 # The time, in milliseconds, spent waiting in poll if data is not available in the buffer. 
+ + # Ui Global Options (optional) + ui-options: + topic: + default-view: ALL # default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM). Overrides default + skip-consumer-groups: false # Skip loading consumer group information when showing topics. Overrides default + skip-last-record: true # Skip loading last record date information when showing topics. Overrides default + topic-data: + sort: NEWEST # default sort order (OLDEST, NEWEST) (default: OLDEST). Overrides default + + # Auth & Roles (optional) + security: + default-group: admin # Default groups for all the user even unlogged user + # Groups definition + groups: + admin: # unique key + name: admin # Group name + roles: # roles for the group + - topic/read + - topic/insert + - topic/delete + - topic/config/update + - node/read + - node/config/update + - topic/data/read + - topic/data/insert + - topic/data/delete + - group/read + - group/delete + - group/offsets/update + - registry/read + - registry/insert + - registry/update + - registry/delete + - registry/version/delete + - acls/read + - connect/read + - connect/insert + - connect/update + - connect/delete + - connect/state/update + attributes: + # Regexp to filter topic available for group + topics-filter-regexp: "test.*" + # Regexp to filter connect configs visible for group + connects-filter-regexp: "^test.*$" + # Regexp to filter consumer groups visible for group + consumer-groups-filter-regexp: "consumer.*" + topic-reader: # unique key + name: topic-reader # Other group + roles: + - topic/read + attributes: + topics-filter-regexp: "test\\.reader.*" + + # Basic auth configuration + basic-auth: + - username: ${UN} # Username + password: ${PW} # Password in sha256 + groups: # Groups for the user + - admin + - topic-reader + + # Ldap Groups configuration (when using ldap) + ldap: + default-group: topic-reader + groups: + - name: group-ldap-1 + groups: # Akhq groups list + - topic-reader + - name: group-ldap-2 + groups: + - admin + users: 
+ - username: ${UN} # ldap user id + groups: # Akhq groups list + - topic-reader + - username: ${UN} + groups: + - admin + + # OIDC configuration + oidc: + enabled: true + providers: + oidc: + label: "Login with OIDC" + username-field: ${UN} + groups-field: roles + default-group: topic-reader + groups: + - name: oidc-admin-group + groups: + - admin + users: + - username: ${UN} + groups: + - admin diff --git a/hardening_manifest.yaml b/hardening_manifest.yaml new file mode 100644 index 0000000000000000000000000000000000000000..063fcd21df02de7a74ed1a425f72173593eecf41 --- /dev/null +++ b/hardening_manifest.yaml @@ -0,0 +1,48 @@ +--- +apiVersion: v1 + +# The repository name in registry1, excluding /ironbank/ +name: "opensource/tchiotludo/akhq" + +# List of tags to push for the repository in registry1 +# The most specific version should be the first tag and will be shown +# on ironbank.dsop.io +tags: +- "0.18.0" +- "latest" + +# Build args passed to Dockerfile ARGs +args: + BASE_IMAGE: "redhat/openjdk/openjdk11" + BASE_TAG: "1.11" + +# Docker image labels +labels: + org.opencontainers.image.title: "akhq" + ## Human-readable description of the software packaged in the image + org.opencontainers.image.description: "AKHQ was previously Kafka GUI for Apache Kafka to manage topics, topics data, consumers group and schema registry" + ## License(s) under which contained software is distributed + org.opencontainers.image.licenses: "Apache License 2.0" + ## URL to find more information on the image + org.opencontainers.image.url: "https://hub.docker.com/r/tchiotludo/akhq" + ## Name of the distributing entity, organization or individual + org.opencontainers.image.vendor: "opensource" + org.opencontainers.image.version: "0.18.0" + ## Keywords to help with search (ex. 
"cicd,gitops,golang") + mil.dso.ironbank.image.keywords: "kafka-dashboard,dataflow,processing,akhq" + ## This value can be "opensource" or "commercial" + mil.dso.ironbank.image.type: "opensource" + ## Product the image belongs to for grouping multiple images + mil.dso.ironbank.product.name: "akhq" + +# List of resources to make available to the offline build context +resources: +- tag: tchiotludo/akhq:0.18.0 + url: docker://docker.io/tchiotludo/akhq@sha256:c977e6dfe9d3a290eb16530e3b8ad0c251076a5997cb9867b635d1727799cbf5 + +# List of project maintainers +maintainers: +- name: "Jacob Rohlman" + username: "jacob.rohlman" + email: "jacob.rohlman@us.af.mil" + cht_member: true diff --git a/renovate.json b/renovate.json new file mode 100644 index 0000000000000000000000000000000000000000..6bc1eb2ead18b1d5749a940d488ac3c9c23aaa02 --- /dev/null +++ b/renovate.json @@ -0,0 +1,33 @@ +{ + "assignees": [ + "@jacob.rohlman" + ], + "baseBranches": [ + "development" + ], + "automerge": true, + "gitLabAutomerge": true, + "regexManagers": [ + { + "fileMatch": [ + "^hardening_manifest.yaml$" + ], + "matchStrings": [ + "org\\.opencontainers\\.image\\.version:\\s+\"(?<currentValue>.+?)\"" + ], + "depNameTemplate": "tchiotludo/akhq", + "datasourceTemplate": "docker" + }, + { + "fileMatch": [ + "^hardening_manifest.yaml$" + ], + "matchStrings": [ + "tags:\\s+-\\s+\"(?<currentValue>.+?)\"" + ], + "depNameTemplate": "tchiotludo/akhq", + "datasourceTemplate": "docker" + } + ] +} + diff --git a/scripts/akhq b/scripts/akhq new file mode 100644 index 0000000000000000000000000000000000000000..5930e6b02b40ed46e3df160a65866683a1843d8d --- /dev/null +++ b/scripts/akhq @@ -0,0 +1,10 @@ +#!/usr/bin/env sh + +# Read user-defined JVM options from jvm.options file +JVM_OPTS_FILE=${JVM_OPTS_FILE:-/app/jvm.options} +for JVM_OPT in `grep "^-" ${JVM_OPTS_FILE}` +do + JAVA_OPTS="${JAVA_OPTS} ${JVM_OPT}" +done + +java ${JAVA_OPTS} -cp /app/akhq.jar:${CLASSPATH} org.akhq.App \ No newline at end of file diff --git 
a/scripts/docker-entrypoint.sh b/scripts/docker-entrypoint.sh new file mode 100644 index 0000000000000000000000000000000000000000..ef7ff08540cfe4e93a9e429e093c94e6391ac27e --- /dev/null +++ b/scripts/docker-entrypoint.sh @@ -0,0 +1,9 @@ + #!/usr/bin/env sh + +set -e + +if [ "${AKHQ_CONFIGURATION}" ]; then + echo "${AKHQ_CONFIGURATION}" > /app/application.yml +fi + +exec "$@" \ No newline at end of file