Security Practices for Grant Tools
We conduct regular testing to ensure software quality and security. We monitor third-party dependencies for security updates through automation, integrating them into upcoming releases. Code changes are unit tested and manually tested before production releases.
Release artifacts are cryptographically signed and automatically verified before use in production environments.
Backups of data at rest enable point-in-time restoration in case of loss or corruption.
Core maintainers ensure the security of contributed solutions. Automated tools and human review assess code contributions. New features are planned collaboratively with our maintainers, and submissions are reviewed against documented implementation plans, including acceptance criteria for ensuring quality control. We follow established procedures to address potential risks and provide retrospective analyses after security incidents.
Staging, QA, and production environments are provisioned solely through automated infrastructure-as-code processes. This ensures infrastructure changes are peer-reviewed, versioned, and auditably tracked.
Authentication uses OpenID Connect (OIDC) with GitHub-signed public keys rather than static credentials, reducing the risk of credential compromise.
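As an illustration, a deployment workflow can assume a cloud role via GitHub's OIDC provider instead of storing long-lived keys. This GitHub Actions fragment is a sketch only; the role ARN and region are placeholders, not our actual configuration:

```yaml
# Illustrative only — the role ARN and region below are placeholders.
permissions:
  id-token: write   # allow the job to request a GitHub-signed OIDC token
  contents: read

steps:
  - name: Configure AWS credentials via OIDC
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
      aws-region: us-east-1
```

Because the workflow presents a short-lived, GitHub-signed token rather than a stored secret, there is no static credential to leak or rotate.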
Access Control Policies
Access policies follow the principle of least privilege, granting the minimum permissions required for each service and user. All policy changes undergo auditing before deployment.
Users authenticate via an email exchange that grants a time-limited credential. Partner organizations manage user email accounts and enforce access controls. USDR does not require a second authentication factor to receive these credentials; we expect our partners to enforce access controls for their users' email accounts according to their own security requirements, which may (and should) include multi-factor authentication (MFA).
Partner organization administrators set role-based user permissions.
All USDR volunteers who collaborate with partners agree to security commitments outlined in our standard volunteer oath. Volunteers do not access protected environments, so our security efforts focus on code review rather than device-level controls. This review-based approach is typical for open-source software projects.
Access to remote systems like APIs, databases, and file stores is logged and retained encrypted for at least 30 days. Most logs are archived and kept for additional periods. Access to production systems is also logged and audited. Requests must authenticate against explicitly granted roles. Access to all assumed roles is also logged, ensuring that requests can always be traced to a single, known actor. Outside of typical application scenarios, access requests are assessed case by case.
Network traffic and data are encrypted in transit and at rest using industry-standard AES-256 encryption; AWS manages all encryption keys. Zero-trust architecture safeguards cloud systems and data.
Automation detects available dependency upgrades and applies them using GitHub's Dependabot. Similar tools scan deployments for vulnerabilities.
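Dependabot is driven by a configuration file checked into the repository. The fragment below is illustrative; the ecosystem and schedule shown are assumptions, not our exact settings:

```yaml
# .github/dependabot.yml — illustrative settings, not our exact configuration
version: 2
updates:
  - package-ecosystem: "npm"   # assumed ecosystem for this sketch
    directory: "/"
    schedule:
      interval: "daily"        # open upgrade pull requests daily
```

Each detected upgrade arrives as an ordinary pull request, so it flows through the same review and testing gates as any other code change.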
Individuals may also report vulnerabilities by contacting USDR directly.
In all cases, patching of software (with or without known vulnerabilities) is prioritized by the Grants team responsible for maintaining it. However, most dependencies are upgraded automatically and tested in a lower environment before production deployment.
Separate development, testing, and production environments exist, with no implicit cross-environment trust. Access is explicitly granted through infrastructure-as-code policies.
Users only access data via authenticated API requests, never through direct database connections.
Users can only access data associated with their organization, enforced programmatically by server-side API request handling. Data is logically segmented this way for each client.
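The tenant scoping described above can be sketched as follows. This is a minimal illustration with a hypothetical in-memory store and handler, not our actual schema or API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    id: int
    organization_id: int
    title: str

# Hypothetical in-memory store standing in for the real database.
GRANTS = [
    Grant(1, 100, "Broadband Expansion"),
    Grant(2, 200, "Water Infrastructure"),
]

def list_grants(authenticated_org_id: int) -> list[Grant]:
    """Server-side handler: the organization filter comes from the
    authenticated session, never from client-supplied input, so a user
    can only ever see rows belonging to their own organization."""
    return [g for g in GRANTS if g.organization_id == authenticated_org_id]
```

Because the filter is applied server-side from the session identity, a client cannot widen its own scope by altering request parameters.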
Data is not physically segmented per tenant. AWS manages all decisions related to physical segmentation.
AWS WAF (Web Application Firewall) protects public entry points to our software from web attacks by blocking requests originating from sources known to be malicious and by inspecting traffic for known malicious patterns (e.g., SQL injections, common vulnerability sniffing, and cross-site scripting attacks).
We use AWS services like API Gateway, CloudFront CDN, and AWS WAF to protect against distributed denial of service attacks and other malicious traffic.
Internet traffic is restricted to TCP port 443 (HTTPS) requests and allowed only to reach necessary endpoints like APIs and CDNs. Internally, systems connect through VPC security groups and network ACLs provisioned for each pair of connected systems.
We follow zero-trust architecture principles: all service-to-service requests must authenticate, encrypt traffic, and authorize based on role, with no implicit trust granted by network location.
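In a zero-trust model, every handler checks the caller's identity and role grants explicitly. The sketch below is illustrative only; the principal model, role names, and decorator are hypothetical, not our implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    service: str
    roles: frozenset[str]

class Forbidden(Exception):
    """Raised when a request is unauthenticated or lacks a required role."""

def require_role(role: str):
    """Guard a handler: the caller must present an authenticated principal
    holding an explicit role grant. Network location is never consulted."""
    def decorator(handler):
        def wrapped(principal, *args, **kwargs):
            if principal is None:
                raise Forbidden("unauthenticated request")
            if role not in principal.roles:
                raise Forbidden(f"missing role: {role}")
            return handler(principal, *args, **kwargs)
        return wrapped
    return decorator

@require_role("grants:read")   # hypothetical role name for this sketch
def get_grant(principal: Principal, grant_id: int) -> dict:
    return {"id": grant_id, "requested_by": principal.service}
```

Note that a request arriving from "inside" the network with no principal is rejected exactly like an external one; authorization depends only on the authenticated identity and its granted roles.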
Datadog and AWS tools monitor security events. Automated static application security testing is conducted against code contributions before acceptance and deployment.
Security threats are assessed, contained, communicated, mitigated, and analyzed retrospectively, following USDR Security Principles.
We do not have ongoing access to client systems and, therefore, cannot provide third parties with access. The only exception is AWS, which requires access to data centers and infrastructure management. Access to client-specific data at rest is not granted to third parties besides AWS.
We use Datadog for processing and storing application logs. Before logging, all personally identifiable or sensitive organization information is redacted. If any logs are found to contain sensitive data, we will scrub the data retroactively or delete it entirely from Datadog's systems.
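Redaction of this kind can be sketched as a simple pattern pass applied before a line is shipped to the log pipeline. The patterns below are illustrative assumptions; the real pipeline's rules are broader than two regexes:

```python
import re

# Hypothetical redaction rules for this sketch, not our full rule set.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def redact(message: str) -> str:
    """Scrub personally identifiable information from a log line
    before it leaves our systems."""
    for pattern, replacement in PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

Running redaction at the point of logging means sensitive values never reach Datadog at all, rather than being scrubbed after the fact.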
Any access granted to third-party vendors is narrowly scoped and restricted. At this time, third parties have no access to raw client data.
Third-party vendors are reviewed at contract renewal or when agreements are updated.