Security Tooling for Latent Space Applications, Language Models, etc.
<warning> Improper monitoring of language models and latent-space applications poses an existential risk via universal, transferable, and automated attack strings; secure your environments as soon as possible. </warning>
Language models have non-patchable vulnerabilities because they share lineage and function
(e.g. Transformer architectures, Common Crawl training data).
Attacks can be automatically customized by malicious actors to achieve specific ends
(e.g. privilege escalation, data extraction).
Open Licensing & Distribution
Latent Space Tools are made available under the Apache 2.0 license via GitHub.
Data Enrichment, Monitoring & Clustering
Note: We are actively developing models that serve as an additional pre-processing step to differentiate attack strings from parameterized URLs. We also aim to develop membership-inference and attribute-inference attacks as pipelines, to enable point-forward, GDPR-compliant 'forgetting' for DNNs, using open-source tools such as WeightWatcher.ai for layer-specific validation.
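As a rough illustration of the attack-string vs. parameterized-URL differentiation described above, the sketch below uses a simple character-entropy heuristic. This is an assumption-laden stand-in, not the project's actual models: the function names, the entropy threshold, and the heuristic itself are illustrative only.

```python
# Illustrative sketch only: a character-entropy heuristic for separating
# optimized adversarial suffixes from ordinary parameterized URLs.
# The threshold value and all names here are assumptions, not project code.
import math
from collections import Counter
from urllib.parse import urlparse, parse_qs


def shannon_entropy(s: str) -> float:
    """Bits per character of the string's character distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def looks_like_parameterized_url(s: str) -> bool:
    """True if the string parses as an http(s) URL with a query string."""
    parts = urlparse(s)
    return parts.scheme in ("http", "https") and bool(parse_qs(parts.query))


def flag_attack_string(s: str, entropy_threshold: float = 4.5) -> bool:
    """Flag strings that are not well-formed parameterized URLs and whose
    character entropy is unusually high, a trait of machine-optimized
    adversarial suffixes (threshold chosen for illustration only)."""
    if looks_like_parameterized_url(s):
        return False
    return shannon_entropy(s) > entropy_threshold
```

In practice a learned classifier over token statistics would replace the fixed threshold; the point is only that well-formed parameterized URLs can be whitelisted structurally before any entropy or model-based scoring is applied.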
More details are available on GitHub.