
Ansible Playbooks for Confluent Platform

Written by Dustin Cote

Here at Confluent, we want to see each Confluent Platform installation be successful right from the start. That means going beyond beautiful APIs and spectacular stream processing features. We want to give engineers running the services a great experience, too, which is why I’m pleased to introduce our first set of Ansible playbooks to help set up and install Confluent Platform. These playbooks help you get a production-ready proof-of-concept cluster off the ground. To get started, just put your hostnames into the example hosts.yml file and run the full playbook with a simple `ansible-playbook -i hosts.yml all.yml`. The result is a secured Confluent Platform in less than five minutes.
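To make that concrete, an inventory for a small cluster might look something like the sketch below. The group names, hostnames, and login user here are illustrative assumptions; the example hosts.yml shipped in the repository is the authoritative starting point.

```yaml
# Hypothetical hosts.yml sketch -- group names and hosts are illustrative;
# start from the example hosts.yml in the repository.
all:
  vars:
    ansible_user: centos          # assumed SSH login user for the target hosts
  children:
    zookeeper:
      hosts:
        zk-1.example.com:
    broker:
      hosts:
        kafka-1.example.com:
        kafka-2.example.com:
        kafka-3.example.com:
```

With your hostnames filled in, `ansible-playbook -i hosts.yml all.yml` runs the full playbook against them.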

What makes these playbooks special? First, they deploy Confluent Platform services with the newly packaged systemd service unit files, giving administrators the familiar look and feel of a production system. We’ve also made the hard decisions for you, setting well-documented default usernames and log locations so that new environments are repeatable and recognizable. The systemd integration even provides a quite lovely integration with journalctl for logging.
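As a quick illustration of that day-to-day experience, the usual systemd tooling applies once the services are installed. The unit name below is an assumption for illustration; check `systemctl list-units` on your hosts for the actual names.

```
# Illustrative commands, assuming a broker unit named confluent-kafka
systemctl status confluent-kafka              # check the broker's service status
journalctl -u confluent-kafka -f              # follow the broker log via journald
journalctl -u confluent-kafka --since today   # show only today's log entries
```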

The second feature is optional automated authentication and encryption setup. The supported modes are PLAINTEXT, SSL, and SASL_SSL. We often hear from users about how they want to get started quickly on a proof-of-concept to show the business value of a streaming data platform, but struggle to get past the first hurdle of enabling SASL and SSL. By choosing the `sasl_ssl` template, this setup generates simple SSL certificates and uses SASL authentication with a plaintext mechanism. The automated setup is designed to show the security capabilities of the platform simply and quickly. The best part is that when it’s time to go to full production, you can replace the provided self-signed SSL certificates and plaintext SASL with your own production-grade certificates and Kerberos principals without having to worry about missing a configuration.
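For reference, a client talking to a cluster deployed with the `sasl_ssl` template would use a configuration along these lines. The file paths, username, and password are placeholder assumptions; the actual truststore and credentials come from the generated setup.

```
# Hypothetical client.properties -- paths and credentials are placeholders
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/var/ssl/private/client.truststore.jks
ssl.truststore.password=changeit
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client-secret";
```

When you move to production, only these values need to change: point the truststore at your own CA-signed certificates and swap the PLAIN mechanism for GSSAPI with your Kerberos principals.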

Finally, the setup isn’t limited to deploying some Confluent Platform services and leaving others as an exercise for the user. You can easily launch every component: Apache ZooKeeper, Kafka brokers, Confluent Schema Registry, Confluent REST Proxy, Kafka Connect workers, KSQL server, and Confluent Control Center. If you launch the whole platform, you also get the awesome topic and connector management features available in Confluent Control Center 4.1, and can get data flowing directly from the UI. In fact, you can check out Robin’s blog post for some great ideas on what to get connected to right away.
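Extending the inventory to the whole platform is just a matter of adding the remaining groups. Again, the group names below are illustrative assumptions; match them to the groups defined in the repository’s example inventory.

```yaml
# Hypothetical full-platform inventory -- group names are illustrative
all:
  children:
    zookeeper:
      hosts:
        zk-1.example.com:
    broker:
      hosts:
        kafka-1.example.com:
    schema-registry:
      hosts:
        sr-1.example.com:
    rest-proxy:
      hosts:
        rest-1.example.com:
    connect:
      hosts:
        connect-1.example.com:
    ksql:
      hosts:
        ksql-1.example.com:
    control-center:
      hosts:
        c3-1.example.com:
```

Running `ansible-playbook -i hosts.yml all.yml` against an inventory like this brings up every component, with Control Center available to manage topics and connectors from the UI.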

The bottom line is, at Confluent, we want each of our users to be successful, whether they are just getting started on a laptop, setting up their first proof-of-concept to show the business, or getting ready to launch their 15th production installation of the platform. We think that these playbooks are a great way to get those proof-of-concept installs off the ground in a way that follows best practices directly from our team.

The repository is open! We would love to hear your feedback and welcome pull requests.

Interested in more?

If you’d like to know more, here are some resources for you:

  • Dustin Cote has spent over four years helping customers wrangle their data and infrastructure using a variety of open source technologies ranging from Hadoop to Kafka. At Confluent, he helps customers stabilize and scale their deployments of Apache Kafka.
