MinusNow Documentation
Complete Reference

Comprehensive Module Guide

Step-by-step how-to documentation for every MinusNow module — from CMDB discovery to AI-driven auto-healing and forensic analysis.

All Modules


CMDB & Inventory

Configuration Management Database — automatic asset discovery, relationship mapping, and lifecycle tracking.

Overview

The CMDB module provides a single source of truth for all IT assets (Configuration Items). It automatically discovers servers, network devices, applications, and cloud resources, then maps dependencies for impact analysis.

How to Set Up CMDB

  1. Deploy Agents on Target Servers: Install the MinusNow agent on each server to be inventoried. See Agent Installation Guide. Agents auto-discover hardware, OS, software, services, and network interfaces.
  2. Configure Discovery Sources: Navigate to Admin → CMDB → Discovery Sources. Add network CIDR ranges for agentless discovery (SNMP, WMI, SSH). Optionally integrate vCenter, AWS, Azure, or GCP for cloud asset import.
  3. Run Initial Discovery Scan: Click Run Discovery or wait for the scheduled scan. The first scan typically takes 5-15 minutes depending on network size. Results appear in the All Assets view.
  4. Define CI Types and Categories: Go to Admin → CMDB → CI Types. Default types include Server, Network Device, Application, Database, Cloud Instance. Add custom types and attributes as needed.
  5. Map Relationships: Relationships are auto-discovered (server → application, application → database). Manually add business service mappings via the Relationship Editor. Use the topology map to visualize dependencies.
  6. Set Lifecycle Policies: Configure asset lifecycle stages: Planned → Active → Maintenance → Retired. Set alerts for warranty expiration, EOL, and compliance drift.
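The relationship map built in step 5 is what drives impact analysis: given any CI, walking its depends-on edges yields the blast radius. A minimal sketch of that traversal, over a hypothetical relationship graph (the CI names and data shape are invented for illustration, not the product's schema):

```python
from collections import deque

# Hypothetical CI relationship graph: edges point from a CI to the CIs
# that depend on it (e.g. an application "runs-on" a server).
DEPENDENTS = {
    "server-01": ["app-billing", "app-crm"],
    "app-billing": ["db-billing"],
    "app-crm": [],
    "db-billing": [],
}

def impact_radius(ci: str) -> list[str]:
    """Return every CI reachable from `ci` via dependency edges (its blast radius)."""
    seen, queue = set(), deque([ci])
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

print(impact_radius("server-01"))  # ['app-billing', 'app-crm', 'db-billing']
```

Taking "server-01" offline would affect both applications and, transitively, the billing database, which is exactly what the topology map visualizes.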

Key Features

Auto-Discovery

Agent-based and agentless discovery via SNMP, WMI, SSH, and cloud APIs. Discovers hardware, OS, installed software, running services, open ports, and network topology.

Relationship Mapping

Automatic dependency mapping between CIs. Visualize impact radius for any asset. Supports parent-child, runs-on, depends-on, and connected-to relationship types.

Change Tracking

Every CI change is versioned and audited. Compare configurations over time. Detect configuration drift and unauthorized changes with diff view.

Cloud Integration

Native sync with AWS (EC2, RDS, Lambda), Azure (VMs, App Service, AKS), and GCP (Compute, Cloud SQL, GKE). Auto-tags and classifies cloud resources.

Data Collected per CI

| Category | Data Points |
|---|---|
| Hardware | CPU model/cores, RAM, disk (type/size/usage), NIC, serial number, BIOS version |
| Operating System | OS name, version, kernel, architecture, uptime, last patch date |
| Software | Installed packages (name, version, source), auto-update status |
| Services | Running services, state, startup type, dependencies, listening ports |
| Network | IP addresses, MAC, gateway, DNS, FQDN, VLAN, open ports |
| Cloud | Instance ID, region, VPC, security groups, tags, cost allocation |

Monitoring & Alerts

Real-time infrastructure monitoring with correlated alerting, dashboards, and anomaly detection.

How to Set Up Monitoring

  1. Install Agents on Monitored Hosts: Agents begin collecting metrics immediately after installation: CPU, RAM, disk I/O, network throughput, process counts, service states. No additional configuration needed for baseline monitoring.
  2. Configure Monitoring Profiles: Navigate to Monitoring → Profiles. Create profiles per host type (web server, database server, etc.). Define which metrics to collect and at what interval (15s-5min).
  3. Define Alert Rules: Go to Monitoring → Alert Rules. Create threshold-based rules (e.g., CPU > 90% for 5 min) or anomaly-based rules (AI detects deviation from baseline). Set severity: Info, Warning, Critical.
  4. Set Up Notification Channels: Configure notification targets: Email, Slack, Microsoft Teams, PagerDuty, webhooks, SMS. Assign channels to alert rules with escalation policies.
  5. Build Dashboards: Use the Dashboard Builder to create custom views. Drag and drop: time-series graphs, heatmaps, gauges, topology maps, top-N tables. Share dashboards with teams or embed in portals.
  6. Enable SNMP / Syslog Ingestion: For network devices, enable SNMP trap receiver on port 162 and syslog on port 514. Configure SNMP community strings and OID mappings in Monitoring → Integrations.
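A threshold rule like "CPU > 90% for 5 min" only fires once the breach is sustained, not on a single spike. A minimal sketch of that evaluation, assuming fixed-interval samples (function and parameter names are illustrative, not the product's API):

```python
def breaches(samples, threshold=90.0, duration_s=300, interval_s=30):
    """True if the metric stayed above `threshold` for at least `duration_s`.
    `samples` are newest-last readings taken every `interval_s` seconds."""
    needed = duration_s // interval_s  # consecutive samples required
    recent = samples[-needed:]
    return len(recent) == needed and all(v > threshold for v in recent)

# 10 samples at 30s intervals = 5 minutes of sustained CPU > 90%
print(breaches([95.0] * 10))          # True  -> alert fires
print(breaches([95.0] * 9 + [50.0]))  # False -> breach not sustained
```

The duration requirement is what keeps transient spikes from generating noise; anomaly-based rules replace the static `threshold` with an AI-learned baseline.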

Metrics Collected

| Category | Metrics | Default Interval |
|---|---|---|
| CPU | Total %, per-core %, load average, steal %, iowait | 30s |
| Memory | Used %, available, swap usage, cache, buffers | 30s |
| Disk | Usage %, IOPS, read/write throughput, latency, inodes | 60s |
| Network | TX/RX bytes, packets, errors, drops, connection count | 30s |
| Process | Count, top consumers, zombie count, file descriptors | 60s |
| Service | State (up/down), response time, port availability | 30s |
| Application | HTTP status, response time, error rate, custom metrics | 60s |

Alert Correlation

The AI engine correlates related alerts (e.g., a disk full alert → service crash → application error) into a single incident, reducing alert fatigue by up to 80%. Correlation rules use topology, timing, and historical pattern matching.


Incident Management

ITIL-aligned incident lifecycle from detection to resolution with AI-assisted triage and escalation.

Incident Lifecycle

Detection → Logging → Classification → Priority Assignment → Investigation → Resolution → Closure

How to Use Incident Management

  1. Create an Incident: Incidents can be created manually via the portal, automatically from alerts, via email, or through the API. The AI auto-classifies the category and suggests priority based on impact and urgency.
  2. Automatic Triage: AI analyzes the incident description, affected CIs, and historical patterns. It automatically categorizes, assigns to the right team, and attaches relevant knowledge articles and past incidents.
  3. Work the Incident: Assigned engineers update the work log, attach artifacts, run diagnostic commands via remote execution, or trigger auto-healing runbooks. Communication is tracked in the activity timeline.
  4. Escalate if Needed: SLA timers track response and resolution targets. Auto-escalation triggers when the SLA is at risk. Escalation can be functional (to a specialist team) or hierarchical (to management).
  5. Resolve and Close: Document the resolution, select a root cause category, and close the incident. The system prompts to create a knowledge article if the resolution is novel. A post-incident review link is auto-generated for P1/P2 incidents.

Priority Matrix

| Impact | High Urgency | Medium Urgency | Low Urgency |
|---|---|---|---|
| High Impact | P1 — Critical (Response: 15 min / Resolve: 4 hrs) | P2 — High (Response: 30 min / Resolve: 8 hrs) | P3 — Medium (Response: 2 hrs / Resolve: 24 hrs) |
| Medium Impact | P2 — High (Response: 30 min / Resolve: 8 hrs) | P3 — Medium (Response: 2 hrs / Resolve: 24 hrs) | P4 — Low (Response: 8 hrs / Resolve: 72 hrs) |
| Low Impact | P3 — Medium (Response: 2 hrs / Resolve: 24 hrs) | P4 — Low (Response: 8 hrs / Resolve: 72 hrs) | P5 — Planning (Response: 24 hrs / Resolve: 1 week) |
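The priority matrix is a pure lookup from impact and urgency to priority. The same mapping, sketched as a Python table:

```python
# Encodes the priority matrix above: (impact, urgency) -> priority.
PRIORITY = {
    ("high", "high"): "P1", ("high", "medium"): "P2", ("high", "low"): "P3",
    ("medium", "high"): "P2", ("medium", "medium"): "P3", ("medium", "low"): "P4",
    ("low", "high"): "P3", ("low", "medium"): "P4", ("low", "low"): "P5",
}

def priority(impact: str, urgency: str) -> str:
    """Look up incident priority from impact and urgency (case-insensitive)."""
    return PRIORITY[(impact.lower(), urgency.lower())]

print(priority("High", "High"))   # P1
print(priority("Low", "Medium"))  # P4
```

This is also why the AI only needs to estimate impact and urgency when suggesting a priority: the final mapping is deterministic.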

Change Management

Risk-controlled change lifecycle with approval workflows, CAB reviews, and automated rollback.

Change Lifecycle

Request → Risk Assessment → Approval → Scheduling → Implementation → Review → Closure

How to Use Change Management

  1. Submit a Change Request (RFC): Navigate to Changes → New Request. Fill in: summary, description, affected CIs (from CMDB), implementation plan, rollback plan, risk level, and requested schedule. Attach supporting documentation.
  2. AI Risk Assessment: The system automatically calculates a risk score based on: affected CI criticality, change complexity, historical failure rate for similar changes, timing (business hours vs maintenance window), and number of CIs impacted.
  3. Approval Workflow: Changes route through configurable approval chains. Standard changes: auto-approved. Normal changes: require manager + tech lead approval. Emergency changes: fast-track with post-approval CAB review.
  4. Schedule and Implement: Approved changes are scheduled in the change calendar. Blackout periods prevent changes during critical business times. Implementation tasks can trigger automated deployment pipelines.
  5. Post-Implementation Review (PIR): After implementation, the system tracks success/failure metrics. Failed changes trigger automatic rollback if configured. All changes are reviewed in the next CAB meeting.
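The risk factors listed in step 2 suggest a weighted score over normalized inputs. The weights and scale below are invented for illustration; the actual scoring model is not documented here:

```python
# Illustrative weights only; the real engine's model is not documented here.
WEIGHTS = {
    "ci_criticality": 0.30,      # 0-1, from CMDB asset criticality
    "complexity": 0.25,          # 0-1, assessed on the RFC
    "historical_failure": 0.25,  # failure rate of similar past changes, 0-1
    "timing": 0.10,              # 1.0 = business hours, 0.0 = maintenance window
    "ci_count": 0.10,            # normalized count of impacted CIs, 0-1
}

def risk_score(factors: dict) -> float:
    """Weighted 0-100 risk score over normalized (0-1) input factors."""
    return round(100 * sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 1)

print(risk_score({
    "ci_criticality": 0.9, "complexity": 0.4, "historical_failure": 0.1,
    "timing": 1.0, "ci_count": 0.2,
}))  # 51.5
```

A change to a critical CI during business hours scores high even when its complexity is modest, which matches how the approval routing treats such changes.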

Change Types

| Type | Risk | Approval | Lead Time | Example |
|---|---|---|---|---|
| Standard | Low | Pre-approved | None | Password reset, user onboarding, pre-tested patch |
| Normal | Medium | 1-2 approvers | 3-5 days | Server upgrade, network config change, app deployment |
| Major | High | CAB review | 7-14 days | Infrastructure migration, architecture change |
| Emergency | Varies | Fast-track | Immediate | Security patch for active exploit, hotfix for outage |

Problem Management

Root-cause investigation, known-error tracking, and proactive problem identification.

How to Use Problem Management

  1. Identify Problems: Problems are identified reactively (from recurring incidents) or proactively (from trend analysis). The AI flags when 3+ incidents share common CIs, categories, or symptoms.
  2. Create a Problem Record: Link related incidents to the problem record. Document symptoms, affected CIs, and the initial hypothesis. This prevents duplicate investigation effort.
  3. Root Cause Investigation: Use the RCA toolkit (timeline correlation, log analysis, change history). Document investigation steps, hypotheses tested, and evidence gathered.
  4. Implement Workaround or Fix: If a permanent fix requires a change, link the problem to a change request. If only a workaround is available, document it as a Known Error and attach it to a knowledge article for frontline teams.
  5. Close and Review: Close the problem once the root cause is resolved and verified. Update all linked incidents. The AI uses this data to improve future auto-resolution accuracy.
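The 3-incident heuristic from step 1 reduces to grouping open incidents by affected CI and flagging any CI that crosses the threshold. A sketch (the incident data shape is assumed, not the product's schema):

```python
from collections import defaultdict

def flag_problem_candidates(incidents, min_count=3):
    """Group incidents by affected CI and flag CIs hit by `min_count`+ incidents."""
    by_ci = defaultdict(list)
    for inc in incidents:
        for ci in inc["cis"]:
            by_ci[ci].append(inc["id"])
    return {ci: ids for ci, ids in by_ci.items() if len(ids) >= min_count}

incidents = [
    {"id": "INC-1", "cis": ["db-01"]},
    {"id": "INC-2", "cis": ["db-01", "web-02"]},
    {"id": "INC-3", "cis": ["db-01"]},
    {"id": "INC-4", "cis": ["web-02"]},
]
print(flag_problem_candidates(incidents))  # {'db-01': ['INC-1', 'INC-2', 'INC-3']}
```

Here "db-01" is flagged as a problem candidate while "web-02" (two incidents) is not; the real engine additionally matches on category and symptom similarity.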

Service Requests

Self-service catalog for users to request IT services with automated fulfillment.

How to Use Service Requests

  1. Browse the Service Catalog: Users access the self-service portal and browse available services organized by category: Access & Accounts, Hardware, Software, Network, Cloud Resources, Reports, etc.
  2. Submit a Request: Select a catalog item, fill in the dynamic form (fields vary by service type), and submit. The system validates inputs, checks entitlements, and routes to the correct fulfillment team.
  3. Automated Approval: Simple requests (e.g., software install from approved list) are auto-approved. Complex requests (e.g., new cloud environment, admin access) route through the approval workflow with manager sign-off.
  4. Fulfillment: Approved requests trigger automated fulfillment where possible: user provisioning via directory sync, software deployment via agent, cloud resource provisioning via API. Manual tasks create work orders for IT staff.
  5. Track and Close: Users can track request status in the portal. Notifications are sent at each stage. Once fulfilled, the request is marked complete and a user satisfaction survey is triggered.

Popular Catalog Items

| Category | Items | Fulfillment |
|---|---|---|
| Access | New user account, password reset, group membership, VPN access | Automated via Directory Sync |
| Hardware | Laptop request, monitor, peripheral, mobile device | Manual (asset assignment) |
| Software | App install, license request, dev environment | Automated via Agent |
| Cloud | VM provisioning, storage, database instance, Kubernetes namespace | Automated via Cloud API |
| Network | Firewall rule, DNS entry, load balancer config | Semi-automated |

Auto-Healing

Automated incident remediation through runbooks, scripts, and AI-driven recovery actions.

How Auto-Healing Works

Alert Triggered → Pattern Match → Runbook Selected → Permission Check → Execute Action → Verify Recovery → Close / Escalate

How to Set Up Auto-Healing

  1. Ensure Agent Has Sudo Access: Auto-healing requires the agent to execute remediation commands. The mnow-agent user must have scoped sudo privileges. See User & Permissions for the exact sudoers configuration.
  2. Create Healing Runbooks: Navigate to Automation → Runbooks. Create runbooks for common scenarios: service restart, disk cleanup, process kill, log rotation, certificate renewal. Each runbook has a trigger condition, action steps, and a verification check.
  3. Map Runbooks to Alert Rules: In Monitoring → Alert Rules, link each alert to its healing runbook. Example: "Disk usage > 90%" → "Disk Cleanup Runbook". Set the healing mode: Automatic (no approval needed) or Approval Required.
  4. Configure Safety Guardrails: Set limits: max auto-healing attempts per CI per hour, cooldown period between actions, exclude critical CIs from automatic remediation, require approval for destructive actions (reboot, data deletion).
  5. Monitor Healing Activity: Review auto-healing activity in Automation → Healing Log. Track success rates, mean time to recovery (MTTR), and actions taken. Failed healing automatically escalates to human operators.

Permission Requirements

Critical: Sudo/Root Access

Auto-healing on Linux requires the mnow-agent user to have sudo NOPASSWD access for specific commands: systemctl, apt/yum/dnf, reboot, rm (scoped), journalctl. On Windows, the agent service must run as Local Administrator or a domain account with admin rights. Without elevated privileges, auto-healing operates in dry-run / recommendation mode only.
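A scoped sudoers grant consistent with the command list above might be sketched as follows. Treat this as illustrative, not the shipped policy: exact binary paths vary by distribution, and you should always edit the file with visudo.

```
# /etc/sudoers.d/mnow-agent -- illustrative sketch, not the shipped policy
# Grants the agent passwordless sudo for the remediation commands only.
mnow-agent ALL=(root) NOPASSWD: /usr/bin/systemctl, \
    /usr/bin/apt-get, /usr/bin/yum, /usr/bin/dnf, \
    /usr/bin/journalctl, /usr/sbin/reboot, \
    /usr/bin/rm /var/log/mnow/*
```

Scoping the rm grant to a specific path (rather than granting rm unconditionally) is what keeps a compromised runbook from deleting arbitrary files.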

Built-In Runbooks

| Runbook | Trigger | Action | Verify |
|---|---|---|---|
| Service Restart | Service down alert | systemctl restart <service> | Check port + health endpoint |
| Disk Cleanup | Disk usage > 90% | Purge old logs, tmp files, journal | Verify usage dropped below 85% |
| Process Kill | Runaway process (CPU > 95%) | Kill process by PID or name | Verify CPU normalized |
| Log Rotation | Log file > 1 GB | Rotate and compress log files | Verify log size reduced |
| Certificate Renewal | Cert expiry < 7 days | Run certbot renew, reload service | Verify cert validity |
| Memory Cleanup | Memory > 95% | Clear caches, restart memory-leaking service | Verify memory dropped |

Capacity Management

Forecast resource utilization, plan capacity, and prevent saturation before it impacts services.

How to Use Capacity Management

  1. Enable Capacity Scanning: Capacity scanning is an extension of monitoring. Navigate to Capacity → Settings and enable capacity collection. The agent collects historical utilization data (CPU, RAM, disk, network) over 30, 60, or 90-day windows.
  2. View Capacity Dashboard: The capacity dashboard shows current utilization vs. available capacity for all hosts. Color-coded gauges highlight hosts approaching saturation (Yellow: 70-85%, Red: 85%+).
  3. Run Forecasting: The AI forecasting engine uses historical trends to predict when resources will exhaust. Navigate to Capacity → Forecast. Generate forecasts for individual hosts, clusters, or the entire fleet.
  4. Create Capacity Alerts: Set proactive alerts for capacity thresholds (e.g., "Disk will be full in 14 days based on current growth"). These alerts trigger before traditional threshold alerts, giving time to plan.
  5. Right-Sizing Recommendations: The AI analyzes actual usage vs. allocated resources. It recommends right-sizing: downsize over-provisioned hosts (cost savings) and upsize under-provisioned hosts (prevent outages).
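In its simplest form, a "days until full" forecast is a linear extrapolation of the observed growth rate; the real engine layers trend models on top, but the underlying arithmetic is just:

```python
def days_until_full(total_gb, used_gb, growth_gb_per_day):
    """Linear-trend forecast: days until the disk reaches capacity.
    Returns None when usage is flat or shrinking (no exhaustion forecast)."""
    if growth_gb_per_day <= 0:
        return None
    return (total_gb - used_gb) / growth_gb_per_day

# 500 GB disk, 430 GB used, growing 5 GB/day:
print(days_until_full(500, 430, 5.0))  # 14.0 -> "disk will be full in 14 days"
```

A capacity alert fires when this forecast drops below the configured lead time, well before the traditional usage-percentage threshold would trip.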

Capacity Metrics

| Resource | Metrics Tracked | Forecast Window |
|---|---|---|
| CPU | Avg util %, peak util %, core count, thread count | 30/60/90-day trend |
| Memory | Avg used %, peak used %, total available | 30/60/90-day trend |
| Disk | Used space, growth rate (GB/day), IOPS headroom | Days until full |
| Network | Avg throughput, peak throughput, bandwidth available | 30/60/90-day trend |

Vulnerability & Patch Management

Scan for vulnerabilities, prioritize risk, orchestrate patching, and track remediation.

How to Set Up Vulnerability Scanning

  1. Deploy Agents on Target Hosts: Agents collect installed package lists (RPM, DEB, MSI) and match them against the MinusNow vulnerability database (synced from NVD, vendor advisories, and commercial feeds).
  2. Configure Scan Schedule: Navigate to Security → Vulnerability → Scan Config. Set scan frequency: real-time (on package change), daily, or weekly. Agent-based scans have zero network overhead — scanning happens locally on each host.
  3. Review Vulnerability Dashboard: The dashboard shows: total vulnerabilities by severity (Critical, High, Medium, Low), trending vulnerability count over time, most affected hosts, and CVSS score distribution.
  4. Risk Prioritization: MinusNow prioritizes based on: CVSS score, exploit availability (EPSS score), asset criticality (from CMDB), network exposure (internet-facing vs. internal), and compensating controls. This surfaces the vulnerabilities that matter most.
  5. Create Patch Campaigns: Group related patches into campaigns. Test on staging hosts first, then roll out to production in waves. The system tracks patch compliance and auto-creates incidents for failed patches.
  6. Deploy Patches via Agent: Patches are deployed through the agent using the OS native package manager (apt, yum, dnf, Windows Update). The agent handles reboot scheduling and verification.
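One way to combine the prioritization inputs from step 4 is a weighted blend of severity, exploit likelihood, asset criticality, and exposure. The weights and 0-100 scale here are purely illustrative, not the product's model:

```python
def priority_score(cvss, epss, criticality, internet_facing):
    """Illustrative blend: CVSS (0-10), EPSS (0-1), asset criticality (0-1),
    and network exposure, combined into a 0-100 priority score."""
    exposure = 1.0 if internet_facing else 0.5
    return round(
        cvss * 10 * 0.4          # severity
        + epss * 100 * 0.3       # exploit likelihood
        + criticality * 100 * 0.2  # how much the asset matters
        + exposure * 100 * 0.1,  # reachable from the internet?
        1,
    )

# Critical, actively exploited CVE on an internet-facing crown-jewel host:
print(priority_score(cvss=9.8, epss=0.97, criticality=1.0, internet_facing=True))  # 98.3
```

The point of blending rather than sorting by CVSS alone is that an internal, low-criticality host with a 9.8 CVSS can rank below an internet-facing critical asset with a 7.x CVSS and a high EPSS.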

Satellite Server for Vulnerability Scanning

Satellite Architecture

In air-gapped or remote environments, a satellite server maintains a local mirror of the vulnerability database. Agents report their package inventory to the satellite, which performs local matching and syncs results to the central server when connectivity is available. See Satellite Requirements.

User Requirements for Vulnerability Module

| Role | Permissions Needed | Actions Available |
|---|---|---|
| Security Admin | Full vulnerability module access | Configure scans, create campaigns, approve patches, manage exceptions |
| IT Operator | Read + Patch execution | View vulnerabilities, deploy approved patches, verify remediation |
| Auditor | Read-only | View reports, export compliance data, review scan history |
| Agent (OS-level) | sudo for package management | Install/update packages, schedule reboots |

Alert-to-Incident Flow

How monitoring alerts become actionable incidents with automatic triage, deduplication, and suppression.

End-to-End Flow

Metric Threshold Breached → Alert Generated → Deduplication → Correlation → Incident Created → Auto-Heal / Assign

How It Works

  1. Threshold Breach / Anomaly Detection: An alert is generated when a monitored metric crosses its configured threshold (static or AI-learned baseline). Anomaly detection catches unusual patterns even without explicit thresholds.
  2. Alert Deduplication: If an identical alert (same host, same metric, same severity) exists within the dedup window (default: 15 min), the new alert is merged into the existing one. The counter increments; no new notifications are sent.
  3. Alert Correlation: The AI engine correlates related alerts into a single parent alert. Example: multiple service alerts on the same host following a disk-full alert are grouped. This reduces alert noise by 60-80%.
  4. Incident Auto-Creation: Correlated alerts that meet incident criteria (severity ≥ Warning, duration ≥ 5 min) automatically create an incident. The incident includes: alert details, affected CIs from CMDB, blast radius visualization, and suggested actions.
  5. Auto-Healing or Human Assignment: If a matching auto-healing runbook exists, it executes immediately. If auto-healing resolves the issue, the incident is auto-closed with documentation. If auto-healing fails or no runbook matches, the incident is assigned to the on-call team via round-robin or skill-based routing.
  6. Alert Suppression: During planned maintenance windows, alerts from affected CIs are suppressed. Suppressed alerts are logged but don't create incidents or trigger notifications. Maintenance windows are linked to change records.
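The dedup rule from step 2 reduces to a key match (host, metric, severity) plus a time-window check. A sketch, with field names assumed for illustration:

```python
from datetime import datetime, timedelta

DEDUP_WINDOW = timedelta(minutes=15)  # default dedup window

def dedup_key(alert):
    # Identical host + metric + severity collapses into one alert.
    return (alert["host"], alert["metric"], alert["severity"])

def ingest(active, alert):
    """Merge `alert` into `active` alerts, or append it as new."""
    key = dedup_key(alert)
    for existing in active:
        if dedup_key(existing) == key and alert["ts"] - existing["ts"] <= DEDUP_WINDOW:
            existing["count"] += 1  # counter increments, no new notification
            return active
    active.append({**alert, "count": 1})
    return active

t0 = datetime(2024, 1, 1, 12, 0)
a = {"host": "web-01", "metric": "cpu", "severity": "warning", "ts": t0}
b = {**a, "ts": t0 + timedelta(minutes=5)}
active = ingest(ingest([], a), b)
print(len(active), active[0]["count"])  # 1 2
```

Two identical CPU warnings five minutes apart become one alert with a count of 2; a third arriving after the window (or with a different severity) would open a fresh alert.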

Automation Actions

Build and deploy automated workflows that respond to events, schedules, or manual triggers.

How to Use Automation Actions

  1. Navigate to Automation Builder: Go to Automation → Actions. The visual builder lets you chain actions using a trigger → condition → action model. No coding required for common scenarios.
  2. Define Triggers: Triggers can be: alert fired, incident created, change approved, schedule (cron), CMDB change detected, API webhook received, or manual button click.
  3. Set Conditions: Add filters to narrow when the action executes: only for specific CI types, severity levels, teams, time windows, or custom field values.
  4. Configure Actions: Choose from built-in actions: run script on host (via agent), send notification, create ticket, call REST API, update CMDB field, trigger deployment pipeline, or run a Python/shell script.
  5. Test and Deploy: Test the automation with dry-run mode (actions are simulated, not executed). Once verified, deploy to production. Monitor execution in the Automation Log with a full audit trail.
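The trigger → condition → action model can be sketched as a small rule closure: the rule fires only when the event type matches the trigger and the condition filter passes. Event and field names below are invented for illustration:

```python
# Minimal trigger -> condition -> action sketch (names are illustrative).
def make_rule(trigger, condition, action):
    def handle(event):
        if event["type"] == trigger and condition(event):
            return action(event)   # condition passed: run the action
        return None                # trigger or condition filtered the event out
    return handle

rule = make_rule(
    trigger="incident.created",
    condition=lambda e: e["severity"] == "critical",
    action=lambda e: f"notify on-call about {e['id']}",
)

print(rule({"type": "incident.created", "severity": "critical", "id": "INC-42"}))
print(rule({"type": "incident.created", "severity": "low", "id": "INC-43"}))  # None
```

Chaining several such rules per event is what the visual builder renders graphically; dry-run mode would simply record the action instead of executing it.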

Common Automation Templates

| Template | Trigger | Actions |
|---|---|---|
| Auto-Assign by Category | Incident created | Check category → assign to correct team → notify via Slack |
| Disk Space Alert Response | Disk > 90% alert | Run cleanup script → verify → create incident if failed |
| New Employee Onboarding | Service request approved | Create AD account → provision email → assign laptop → notify HR |
| Change Approval Reminder | Change pending 48 hrs | Send reminder to approver → escalate after 72 hrs |
| SSL Certificate Expiry | Cert expiry < 30 days | Create change request → auto-renew → verify → close |

Directory Integration & IAM

Synchronize users, groups, and roles from Active Directory, LDAP, Azure AD, Okta, and other identity providers.

How to Set Up Directory Integration

  1. Configure Directory Source: Navigate to Admin → Directory Integration. Add your identity source: Active Directory (LDAP), Azure AD (SCIM/Graph API), Okta (SCIM), Google Workspace, or any LDAP-compatible directory.
  2. Provide Connection Details: For AD/LDAP: host, port (389/636), bind DN, base DN, search filter. For Azure AD: tenant ID, client ID, client secret, SCIM endpoint. For Okta: API token, org URL.
  3. Map Attributes: Map directory attributes to MinusNow user fields: display name, email, department, manager, location, job title. Configure group-to-role mappings (e.g., "IT-Admins" AD group → "Admin" MinusNow role).
  4. Configure Sync Schedule: Set sync frequency: real-time (webhooks), every 15 min, hourly, or daily. Delta sync only processes changes since the last sync. Full sync re-evaluates all users and groups.
  5. Enable SSO: Configure Single Sign-On: SAML 2.0 or OpenID Connect. Users authenticate against the corporate identity provider and are auto-provisioned in MinusNow on first login (JIT provisioning).
  6. Test and Verify: Run a test sync to verify user and group import. Check the sync log for errors. Verify SSO login with a test user account.
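The group-to-role mapping from step 3 is a lookup with a precedence order: a user in several mapped groups gets the most privileged role. The group and role assignments below are examples, not shipped defaults:

```python
# Illustrative group-to-role mapping; actual names come from your directory.
GROUP_ROLE_MAP = {
    "IT-Admins": "Admin",
    "IT-Operators": "Agent",
    "All-Staff": "User",
}

def resolve_role(member_of: list[str]) -> str:
    """Pick the most privileged mapped role from a user's memberOf groups."""
    precedence = ["Admin", "Agent", "User", "Viewer"]  # most to least privileged
    roles = {GROUP_ROLE_MAP[g] for g in member_of if g in GROUP_ROLE_MAP}
    for role in precedence:
        if role in roles:
            return role
    return "Viewer"  # default for users with no mapped groups

print(resolve_role(["All-Staff", "IT-Admins"]))  # Admin
print(resolve_role(["Contractors"]))             # Viewer
```

This is also why the sync reads the memberOf attribute: role changes in MinusNow follow group membership changes in the directory automatically.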

User Sync Requirements

Source System (Active Directory / LDAP)

  • Read Access: Service account with read access to user/group OUs
  • Attributes: sAMAccountName, mail, displayName, memberOf, department, manager, userAccountControl
  • Port: LDAPS (636) recommended; LDAP (389) supported with STARTTLS
  • Firewall: App server → AD server on port 636/389

Source System (Azure AD / Entra ID)

  • App Registration: Azure AD app with User.Read.All, Group.Read.All, Directory.Read.All permissions
  • Protocol: Microsoft Graph API or SCIM 2.0
  • Authentication: OAuth 2.0 client credentials grant

Application Server (MinusNow)

  • User Provisioning: Creates/updates MinusNow user records based on directory data
  • Role Mapping: Maps directory groups to platform roles (Admin, Agent, User, Viewer)
  • Deprovisioning: Disabled directory users are auto-deactivated in MinusNow
  • Conflict Resolution: Email is the unique identifier; duplicates are flagged for manual review

Client-Side (End User Browser)

  • SSO Redirect: Browser must support SAML 2.0 or OIDC redirects (all modern browsers)
  • Cookies: Third-party cookies must be allowed for the IdP domain
  • Pop-up Blocker: May need an exception for IdP consent screens (Azure AD)

Knowledge Base

Create, manage, and auto-surface knowledge articles linked to live operations.

How to Use the Knowledge Base

  1. Create Knowledge Articles: Navigate to Knowledge → New Article. Use the rich-text editor or Markdown. Structure articles with: Overview, Symptoms, Cause, Resolution Steps, and Related CIs. Tag articles with categories and keywords.
  2. Link to Operations: Articles can be linked to: incident categories (auto-suggested during triage), problem records (known error workarounds), CIs (troubleshooting guides for specific assets), and service catalog items (how-to guides).
  3. Auto-Suggestion: When agents work an incident, the AI searches the knowledge base and surfaces relevant articles based on: incident description, affected CIs, category, and historical match patterns. This reduces resolution time by 30-50%.
  4. Manage Lifecycle: Articles have lifecycle stages: Draft → Review → Published → Archived. Set review dates for periodic accuracy checks. Track article usefulness via ratings and usage analytics.
  5. Access Control: Control article visibility by plan tier (Free, Pro, Enterprise). Internal articles (troubleshooting runbooks) can be restricted to IT staff only, while user-facing articles appear in the self-service portal.

RCA & Forensics

Automated root-cause analysis, timeline reconstruction, and exportable forensic reports.

How to Use RCA & Forensics

  1. Trigger RCA Analysis: For P1/P2 incidents, RCA is automatically triggered upon resolution. For other incidents, click Run RCA from the incident detail page. The AI begins collecting and correlating evidence.
  2. Review Timeline: The RCA engine reconstructs a timeline of events leading up to, during, and after the incident. Data sources include: monitoring alerts, change records, log entries, deployment events, and CMDB changes.
  3. AI Root Cause Identification: The AI analyzes the correlated timeline and identifies the most likely root cause with a confidence score. It considers: temporal correlation, change proximity, historical patterns, and causal chain analysis.
  4. Review Evidence & Artifacts: Each RCA report includes: timeline visualization, change log (was anything deployed before the incident?), alert history, affected CI map, log excerpts, and the AI's reasoning chain (Explainable AI — showing why it reached its conclusion).
  5. Export RCA Report: Export the complete RCA report as PDF or HTML for stakeholders. The report includes: executive summary, detailed timeline, root cause analysis, contributing factors, corrective actions, and preventive recommendations.
  6. Create Follow-Up Actions: From the RCA report, create: problem records, change requests (for permanent fixes), knowledge articles (documenting the root cause), and action items assigned to specific teams.
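The "was anything deployed before the incident?" check reduces to filtering change records by a look-back window from symptom onset, with the nearest change the prime suspect. A sketch, with the window length and data shapes assumed for illustration:

```python
from datetime import datetime

def changes_before(first_alert_ts, changes, window_hours=24):
    """Return changes implemented within `window_hours` before symptom onset,
    nearest first. These are the prime root-cause suspects."""
    suspects = [
        c for c in changes
        if 0 <= (first_alert_ts - c["ts"]).total_seconds() <= window_hours * 3600
    ]
    return sorted(suspects, key=lambda c: first_alert_ts - c["ts"])

alert_ts = datetime(2024, 1, 1, 14, 0)   # first symptom in monitoring
changes = [
    {"id": "CHG-7", "ts": datetime(2024, 1, 1, 13, 30)},   # 30 min before
    {"id": "CHG-5", "ts": datetime(2023, 12, 25, 9, 0)},   # a week earlier
    {"id": "CHG-8", "ts": datetime(2024, 1, 1, 15, 0)},    # after the alert
]
print([c["id"] for c in changes_before(alert_ts, changes)])  # ['CHG-7']
```

The real engine weighs change proximity alongside topology and historical patterns, but temporal correlation like this is the backbone of its confidence score.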

RCA Data Sources

| Source | Data | Purpose |
|---|---|---|
| Monitoring | Alert history, metric trends, anomalies | When did symptoms first appear? |
| Change Records | Recent changes on affected CIs | Was the incident caused by a change? |
| CMDB | CI relationships, dependencies | What was the blast radius? |
| Deployment Logs | CI/CD events, release notes | Was new code deployed recently? |
| System Logs | syslog, event log, application logs | What errors occurred at the time? |
| Audit Logs | User actions, access events | Was there unauthorized or unexpected access? |

User Sync Architecture

How user data flows between source directories, the MinusNow application, and client servers.

Three-Tier Sync Architecture

Source Directory (AD, Azure AD, Okta) → MinusNow App Server (User DB, Role Engine) → Client Servers (Agents, Local Auth)

Sync Requirements by Tier

1. Source → Application

What syncs: User accounts, group memberships, organizational structure, manager hierarchy, account status (enabled/disabled), authentication credentials (via SSO redirect, not password sync).

Requirements:

  • Protocol: LDAPS, SCIM 2.0, or MS Graph API
  • Network: App server must reach the directory on port 636 or 443
  • Service Account: Read-only directory service account
  • Frequency: Every 15 min (delta) + daily full sync

2. Application → Client Servers

What syncs: Agent configuration (which user has access to run commands on which hosts), role-based permissions (who can trigger auto-healing), monitoring exemptions, and maintenance window awareness.

Requirements:

  • Protocol: HTTPS (port 8443) with mTLS
  • Network: Client servers must reach the app server on port 8443
  • Agent: MinusNow agent running as mnow-agent
  • Frequency: Real-time push + periodic pull (5 min)

Data Flow Detail

| Data | Source | Destination | Direction | Frequency |
|---|---|---|---|---|
| User accounts | AD / Azure AD | MinusNow App DB | Source → App | Every 15 min |
| Group memberships | AD / Azure AD | MinusNow Role Engine | Source → App | Every 15 min |
| SSO tokens | IdP (SAML/OIDC) | User browser → App | Redirect flow | On login |
| Agent config | MinusNow App | Client agents | App → Client | Real-time push |
| RBAC policies | MinusNow App | Client agents | App → Client | On change + 5 min poll |
| Telemetry data | Client agents | MinusNow App DB | Client → App | Every 30-60s |
| Vulnerability data | Client agents | MinusNow App DB | Client → App | On package change + daily |

Security

All sync communication uses TLS 1.3 encryption. Agent-to-server communication uses mutual TLS (mTLS) with auto-rotated certificates. No passwords are synced — authentication is always handled via the identity provider using SAML 2.0 or OIDC. API tokens are encrypted at rest with AES-256-GCM.