Add filter, add READMES

This commit is contained in:
Oxy8
2026-03-06 15:35:04 -03:00
parent b44867abfa
commit 3c487d088b
56 changed files with 2495 additions and 1424 deletions

README.md

@@ -0,0 +1,108 @@
# Visualizador Instanciados
This repo provides a Docker Compose stack for visualizing large RDF/OWL graphs stored in **AnzoGraph**. It includes:
- A **Go backend** that queries AnzoGraph via SPARQL and serves a cached graph snapshot + selection queries.
- A **React/Vite frontend** that renders nodes/edges with WebGL2 and supports “selection query” + “graph query” modes.
- A **Python one-shot service** that follows `owl:imports` to combine ontologies into a single Turtle file.
- An **AnzoGraph** container (SPARQL endpoint).
## Quick start (Docker Compose)
1) Put your TTL file(s) in `./data/` (this folder is volume-mounted into AnzoGraph as `/opt/shared-files`).
2) Optionally configure `.env` (see `.env.example`).
3) Start the stack:
```bash
docker compose up --build
```
Then open the frontend:
- `http://localhost:5173`
Stop everything:
```bash
docker compose down
```
## Services
Defined in `docker-compose.yml`:
- `anzograph` (image `cambridgesemantics/anzograph:latest`)
- Ports: `8080`, `8443`
- Shared files: `./data → /opt/shared-files`
- `backend` (`./backend_go`)
- Port: `8000` (API under `/api/*`)
- Talks to AnzoGraph at `SPARQL_HOST` / `SPARQL_ENDPOINT`
- `frontend` (`./frontend`)
- Port: `5173`
- Proxies `/api/*` to `VITE_BACKEND_URL`
- `owl_imports_combiner` (`./python_services/owl_imports_combiner`)
- One-shot: optionally produces a combined TTL by following `owl:imports`
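The wiring above can be sketched as a compressed `docker-compose.yml` fragment. This is illustrative only (the image name, ports, and paths come from this README; the `SPARQL_HOST` and `VITE_BACKEND_URL` values are assumptions, and env/network details are omitted); see the actual `docker-compose.yml` for the authoritative version.

```yaml
services:
  anzograph:
    image: cambridgesemantics/anzograph:latest
    ports: ["8080:8080", "8443:8443"]
    volumes:
      - ./data:/opt/shared-files      # TTL inputs/outputs
  backend:
    build: ./backend_go
    ports: ["8000:8000"]
    environment:
      - SPARQL_HOST=anzograph         # assumed: backend reaches AnzoGraph by service name
  frontend:
    build: ./frontend
    ports: ["5173:5173"]
    environment:
      - VITE_BACKEND_URL=http://backend:8000   # hypothetical value
  owl_imports_combiner:
    build: ./python_services/owl_imports_combiner   # one-shot, exits after combining
```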
Service READMEs:
- `backend_go/README.md`
- `frontend/README.md`
- `python_services/owl_imports_combiner/README.md`
- `anzograph/README.md`
## Repo layout
- `backend_go/` Go API service (SPARQL → snapshot + selection queries)
- `frontend/` React/Vite WebGL renderer
- `python_services/owl_imports_combiner/` Python one-shot OWL imports combiner
- `data/` local shared volume for TTL inputs/outputs (gitignored)
- `docker-compose.yml` service wiring
- `flake.nix` optional Nix dev shell
## Configuration
This repo expects a local `.env` file (not committed). Start from `.env.example`.
Common knobs:
- Backend snapshot size: `DEFAULT_NODE_LIMIT`, `DEFAULT_EDGE_LIMIT`, `MAX_NODE_LIMIT`, `MAX_EDGE_LIMIT`
- SPARQL connectivity: `SPARQL_HOST` or `SPARQL_ENDPOINT`, plus `SPARQL_USER` / `SPARQL_PASS`
- Load data on backend startup: `SPARQL_LOAD_ON_START=true` with `SPARQL_DATA_FILE=file:///opt/shared-files/<file>.ttl`
- Frontend → backend proxy: `VITE_BACKEND_URL`
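A minimal `.env` sketch using the knobs above (all values are illustrative placeholders; start from `.env.example` for the real defaults):

```bash
# SPARQL connectivity (hostname and credentials are illustrative)
SPARQL_HOST=anzograph
SPARQL_USER=admin
SPARQL_PASS=changeme

# Optionally load a TTL file on backend startup
SPARQL_LOAD_ON_START=true
SPARQL_DATA_FILE=file:///opt/shared-files/example.ttl

# Backend snapshot size (example values)
DEFAULT_NODE_LIMIT=50000
DEFAULT_EDGE_LIMIT=200000

# Frontend → backend proxy target
VITE_BACKEND_URL=http://localhost:8000
```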
## API (backend)
Base URL: `http://localhost:8000`
- `GET /api/health` liveness
- `GET /api/stats` snapshot stats (uses default limits)
- `GET /api/graph` graph snapshot
- Query params: `node_limit`, `edge_limit`, `graph_query_id`
- `GET /api/graph_queries` available graph snapshot modes (`graph_query_id` values)
- `GET /api/selection_queries` available selection-highlight modes (`query_id` values)
- `POST /api/selection_query` run a selection query for highlighted neighbors
- Body: `{"query_id":"neighbors","selected_ids":[...],"node_limit":...,"edge_limit":...,"graph_query_id":"default"}`
- `POST /api/sparql` raw SPARQL passthrough (debug/advanced)
- `POST /api/neighbors` legacy alias (same behavior as `query_id="neighbors"`)
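As a sketch of the selection-query call, the snippet below builds the documented request body and shows how it would be POSTed with the stack running. The node IRI and limits are hypothetical placeholders; only the field names and endpoint path come from this README.

```python
import json

# Request body for POST /api/selection_query, matching the documented shape.
# The selected_ids entry is a hypothetical node IRI; limits are example values.
body = {
    "query_id": "neighbors",
    "selected_ids": ["http://example.org/node/1"],
    "node_limit": 500,
    "edge_limit": 2000,
    "graph_query_id": "default",
}
payload = json.dumps(body)
print(payload)

# With the stack running, send it with the standard library:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/api/selection_query",
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```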
## Frontend UI
- Mouse:
- Drag: pan
- Scroll: zoom
- Click: select nodes
- **Top-right buttons:** “selection query” mode (how neighbors/highlights are computed for the current selection)
- **Bottom-right buttons:** “graph query” mode (which SPARQL edge set is used to build the graph snapshot; switching reloads the graph)
## Notes on performance/limits
- The backend caches snapshots in memory; lower the `DEFAULT_*_LIMIT` values if memory usage is too high.
- The frontend renders a sampled subset when zoomed out, and only draws edges when fewer than ~20k nodes are visible.
## Nix dev shell (optional)
If you use Nix, `flake.nix` provides a minimal `devShell`:
```bash
nix develop
```