DevProd team update for week of February 1
The DevProd team is launching a blog with weekly updates on migrating apps to Clowder and ephemeral environments! This will include brief descriptions of changes and new features added to both Clowder and Bonfire, progress updates regarding specific app migrations, issues and warnings pertinent to dev teams, and other developments within the DevProd team.
Clowder
Feature flags
Clowder now supports passing connection information for an Unleash feature flag server in `cdappconfig.json`. The feature only supports `local` mode at this time; `app-interface` mode will be added once devs need a feature flag server in production. In `local` mode, Clowder will deploy an Unleash instance and provide the hostname and port to the application. Apps will need to add `featureFlags: true` to their app spec to get the credentials. Also make sure that the `featureFlags` provider in your ClowdEnvironment has `mode: local` so the Unleash instance gets deployed.
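Putting the two spec changes above together, a minimal sketch might look like this (the resource names are made up; only `featureFlags: true` and `mode: local` come from the feature itself, and the other fields shown are illustrative):

```yaml
# ClowdApp: opt in to receiving feature flag connection info (name is illustrative)
apiVersion: cloud.redhat.com/v1alpha1
kind: ClowdApp
metadata:
  name: my-app
spec:
  featureFlags: true
---
# ClowdEnvironment: run the featureFlags provider in local mode so that
# Clowder deploys an Unleash instance for the environment
apiVersion: cloud.redhat.com/v1alpha1
kind: ClowdEnvironment
metadata:
  name: my-env
spec:
  providers:
    featureFlags:
      mode: local
```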
Shared DBs
While the DevProd team recommends apps do not share DBs, some apps cannot easily re-architect to meet this recommendation. Thus we have added the ability for `ClowdApps` to share databases by using the `sharedDbAppName` attribute in the spec:

```yaml
metadata:
  name: advisor-service
spec:
  database:
    sharedDbAppName: advisor-api
```
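For contrast, the app that owns the database (`advisor-api` above) keeps an ordinary `database` block; a minimal sketch, assuming the standard `name` field of the Clowder database spec:

```yaml
metadata:
  name: advisor-api
spec:
  database:
    name: advisor   # owning app declares the DB; advisor-service points at it via sharedDbAppName
```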
Private ports
Since many apps expose a separate endpoint for internal APIs, Clowder needed to provide support for exposing this port. This change touched several Clowder components:

- Created a new `webServices` section in the `ClowdApp` spec, e.g.:

  ```yaml
  spec:
    webServices:
      public:
        enabled: true
      private:
        enabled: true
  ```

- Deprecated the `web` section in the `ClowdApp` spec in favor of the `webServices` section.
- `ClowdEnvironment` now has a `spec.providers.web.privatePort` field to specify the port number used for private endpoints.
- Added `publicPort` and `privatePort` to `cdappconfig.json` in place of `webPort`.
- Consequently, `webPort` is now considered deprecated and apps should use `publicPort` instead.
- Released new versions of the Python, Go, and JS app-common libs to support the `cdappconfig.json` changes.
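For apps that read `cdappconfig.json` directly rather than through an app-common lib, the migration logic can be sketched as follows (the sample JSON and port values are illustrative, not taken from a real config):

```python
import json

# Illustrative sample of the relevant cdappconfig.json fields named in this
# post: the new publicPort/privatePort fields plus the deprecated webPort.
sample = """{
  "publicPort": 8000,
  "privatePort": 10000,
  "webPort": 8000
}"""

cfg = json.loads(sample)

# Prefer the new publicPort field, falling back to the deprecated webPort
# for configs generated by older Clowder versions.
public_port = cfg.get("publicPort", cfg.get("webPort"))

# privatePort is only present when the private web service is enabled.
private_port = cfg.get("privatePort")

print(public_port, private_port)
```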
Ruby app-common client
Work for the app-common client for Ruby has begun. Thanks to Hui Song and Madhu Kanoor for driving this effort!
https://github.com/RedHatInsights/app-common-ruby
Redesigned README
The Clowder README has been refined in the last few weeks to better provide what devs look for in a README: What is this thing, why do I care, and how do I use it?
- New images to help explain and demonstrate what Clowder does
- GitHub releases will contain a manifest that installs Clowder in a cluster
- Revised explanation of what Clowder is
Bonfire
Deploy frontend via Bonfire
The ability to deploy the Clouddot frontend to an ephemeral environment via Bonfire is nearly complete. Details on how to deploy the frontend will be in the next update.
Local config
Bonfire now allows the use of a local YAML file to provide template configuration as an alternative to app-interface. This should reduce the learning curve for devs trying to get started with ephemeral environments: instead of having to make local changes to app-interface and run a GraphQL server, all a dev needs to do is update `config.yaml` to configure their app.

Bonfire ships an `example_config.yaml` that devs can use as a starting point. Copy `example_config.yaml` to `config.yaml` and run `bonfire config get -l -a <app>` to have Bonfire produce K8s resources that can be piped to `oc apply -f -`.
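As a rough sketch of what that local template configuration might contain (the app name, repo path, and template path below are made up, and the exact keys may differ; treat `example_config.yaml` as the authoritative schema):

```yaml
apps:
- name: my-app
  components:
  - name: my-app-service
    host: local                  # pull templates from a local checkout instead of app-interface
    repo: ~/src/my-app           # path to the local git repo (illustrative)
    path: deploy/clowdapp.yaml   # template file within that repo (illustrative)
```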
Migrations
While we continue to focus on deploying host-inventory and RBAC to stage and prod via Clowder, these apps are already available for use in ephemeral environments, along with these apps:
- Ingress
- Puptoo
- Storage broker
- Engine (insights-core)
- Remediations
- Receptor
Work also continues on Advisor, but this is mostly focused on integrating the Cyndi operator with Clowder.
Documentation
The DevProd team is starting a focused effort to revamp our documentation on migration to the AppSRE build pipeline and adding support for Clowder & ephemeral environments to your apps. We plan to have updated documentation in the coming weeks!
In the meantime, please check out our updated README and API Reference.
V3 Cluster
There has been an ongoing issue with the V3 cluster where nodes would be periodically marked with a “taint” indicating that the node's storage was in a bad state. When this happens, the entire node is marked as unavailable for scheduling pods. If the issue is not resolved before too many nodes get tainted, the cluster starts reporting resource exhaustion, causing CI/CD pipelines to freeze and CI/QA environments to stop functioning.
While the root cause has never been identified, a workaround has now been automated: a node is drained of pods and rebooted when a taint is first detected. Previously, the DevProd team had to submit a ticket to OSD to get a node rebooted, so this should significantly improve the stability of the V3 cluster.