4 Commits

8 changed files with 332 additions and 69 deletions

AGENTS.md (new file, 164 lines)
@@ -0,0 +1,164 @@
# AGENTS.md
Agent guidance for this repository (`Soul Droid Chat`).
## Quick Start for New Agents
1. Read `game/script.rpy`, `game/screens.rpy`, and `game/llm_ren.py` to understand flow and LLM integration.
2. Run `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" lint` before making changes.
3. Make small, focused edits that preserve Ren'Py ids and the `EMOTION:<value>` response contract.
4. Validate with `compile` and `test` (`test <testcase_or_suite_name>` for a single case while iterating).
5. Avoid committing generated artifacts/logs (`*.rpyc`, `game/cache/`, `log.txt`, `errors.txt`, `traceback.txt`) or secrets.
6. In your handoff, include what changed, why, and exact validation commands run.
## Project Snapshot
- Engine: Ren'Py 8.5.x project with script files in `game/*.rpy`.
- Python support code is embedded in Ren'Py files and `game/*_ren.py`.
- Runtime dependency: LM Studio server (default `http://localhost:1234`).
- No dedicated `tests/` directory was found; use Ren'Py `lint` and `test`.
## Repository Layout
- `game/script.rpy`: main dialogue/game flow loop.
- `game/screens.rpy`: UI screens, preferences, and menus.
- `game/options.rpy`: Ren'Py/build/runtime config and defaults.
- `game/llm_ren.py`: LLM call logic, parsing, and sanitizing.
- `game/constants_ren.py`: emotion synonym table.
- `project.json`, `android.json`: distribution/build settings.
## Tooling and Environment
- Expected Ren'Py launcher script in PATH: `renpy.sh`.
- CLI usage pattern:
  - `"renpy.sh" "<repo-path>" <command> [args]`
- A system `python3` is available, but gameplay validation should go through the Ren'Py commands below.
## Build, Lint, and Test Commands
Run all commands from repository root:
`/home/$USER/Documentos/Renpy Projects/Soul Droid Chat`
### Quick checks (most common)
- Lint project:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" lint`
- Run game locally:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" run`
- Compile scripts/python cache:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" compile`
### Test execution (Ren'Py test runner)
- Run default/global test suite:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" test`
- Run a single test suite or testcase (important):
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" test <testcase_or_suite_name>`
- Show detailed test report:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" test --report-detailed`
- Run all testcases even if disabled:
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" test --enable-all`
### Distribution/build packaging
- Create distributions (launcher command):
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" distribute`
- Build for Android (requires a configured SDK and keystore):
  - `"renpy.sh" "/home/$USER/Documentos/Renpy Projects/Soul Droid Chat" android_build`
## Suggested Validation Sequence for Agents
1. `lint`
2. `compile`
3. `test` (or `test <single_case>` when iterating)
4. `run` for a manual smoke check on the edited flow/screen
If a command fails, include the failing command and key error excerpt in your report.
## Code Style Guidelines
### General principles
- Match existing Ren'Py and Python style in nearby files.
- Prefer small, focused changes; avoid unrelated refactors.
- Keep user-facing narrative tone and Anita persona constraints intact.
- Preserve ASCII-oriented output behavior where current code expects it.
### Ren'Py script/style conventions (`.rpy`)
- Use 4-space indentation; never tabs.
- Keep labels/screen names `snake_case` (`label start`, `screen quick_menu`).
- Keep style declarations grouped and readable (existing file order is a good template).
- For dialogue flow, keep Python blocks short; move reusable logic to `*_ren.py`.
- Use explicit keyword args when clarity helps (`prompt =`, `fadeout`, `fadein`).
- Do not rename core ids required by Ren'Py (`_("window")`, `_("what")`, `_("who")`, `_("input")`).
### Python conventions (`*_ren.py` and embedded `init python`)
- Imports:
  - Standard library first (`re`).
  - Ren'Py modules (`renpy`, `persistent`).
  - Local imports last (`from .constants_ren import SYNONYMS`).
- Naming:
  - Functions/variables: `snake_case`.
  - Constants: `UPPER_SNAKE_CASE` (`EMOTIONS`, `SYSTEM_PROMPT`).
  - Keep names descriptive (`parse_emotion`, `sanitize_speech`, `fetch_llm`).
- Types:
  - Add type hints on new/edited Python functions when practical.
  - Keep return types accurate (e.g. annotate a function that returns a list of strings as `-> list[str]`, not `-> str`).
- Formatting:
  - Follow PEP 8 basics for spacing, line breaks, and readability.
  - Keep multiline literals and dicts formatted consistently with existing file style.
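A hypothetical snippet illustrating these conventions (the names below are invented for illustration, not taken from the codebase):

```python
import re

# Invented constant, following the UPPER_SNAKE_CASE rule.
WHITESPACE_REGEX = re.compile(r"\s+")


def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces (illustrative helper)."""
    return WHITESPACE_REGEX.sub(" ", text).strip()


def split_lines(text: str) -> list[str]:
    """Return non-empty, stripped lines; note the accurate list return type."""
    return [line.strip() for line in text.split("\n") if line.strip()]
```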
### Error handling and resilience
- Wrap external boundary calls (network/LM Studio) with `try/except`.
- Return safe fallback values that keep game loop stable.
- Error messages should be concise and actionable for debugging.
- Avoid swallowing exceptions silently; at minimum return or log context.
- Preserve conversation continuity fields when present (`last_response_id`).
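A minimal sketch of this boundary pattern (hypothetical names; the real call site is `fetch_llm` in `game/llm_ren.py`):

```python
from typing import Callable


def safe_fetch(fetch: Callable[[], str]) -> list[str]:
    """Wrap an external call; on failure, return a fallback that keeps the game loop alive."""
    try:
        text = fetch()
        return text.split("\n")
    except Exception as e:
        # Concise, actionable, and not silently swallowed: the error is
        # surfaced to the caller instead of crashing the dialogue loop.
        return [f"Failed to fetch with error: {e}"]
```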
### LLM integration constraints
- Keep `SYSTEM_PROMPT` format guarantees consistent unless intentionally changing behavior.
- Maintain `EMOTION:<value>` parsing contract used by dialogue rendering.
- If you add emotions, update both:
  - `EMOTIONS` in `game/llm_ren.py`
  - `SYNONYMS` in `game/constants_ren.py`
- Keep speech sanitization aligned with UI/rendering constraints.
### UI/screens changes
- Reuse existing `gui.*` variables and style helpers where possible.
- Keep mobile/small variant handling (`renpy.variant(_("small"))`) intact.
- Prefer extending existing screens over introducing parallel duplicate screens.
- For settings UI, follow patterns already used in `preferences` screen.
## Agent Working Rules
- Before edits:
  - Read related files fully (at least the touched blocks and nearby context).
  - Check for existing patterns and follow them.
- During edits:
  - Do not include secrets or API keys in committed files.
  - Do not commit generated caches/logs (`*.rpyc`, `log.txt`, `errors.txt`, `traceback.txt`).
- After edits:
  - Run relevant validation commands from the section above.
  - Summarize what changed, why, and what was validated.
## Git and Change Scope
- Keep commits scoped to the requested task.
- Avoid touching binary assets unless the task explicitly requires it.
- If a keystore or credential-like file is changed, call it out explicitly.
- Do not rewrite history unless explicitly requested.
## Cursor/Copilot Rules Status
- No repository-specific Cursor rules were found:
  - `.cursor/rules/` not present.
  - `.cursorrules` not present.
- No repository-specific Copilot instruction file found:
  - `.github/copilot-instructions.md` not present.
If any of the above files are added later, treat them as higher-priority constraints and update this document.

(Binary image file changed, not shown: 17 KiB before, 108 KiB after.)

game/llm_ren.py

@@ -11,15 +11,35 @@ init python:
 import re

+EMOTION_REGEX = re.compile(r"EMOTION:\w+")
+EMOTION_TOKEN_REGEX = re.compile(rf"{EMOTION_REGEX.pattern} ?")
+EMOJI_REGEX = re.compile(
+    "["
+    "\U0001f1e6-\U0001f1ff"  # flags
+    "\U0001f300-\U0001f5ff"  # symbols and pictographs
+    "\U0001f600-\U0001f64f"  # emoticons
+    "\U0001f680-\U0001f6ff"  # transport and map
+    "\U0001f900-\U0001f9ff"  # supplemental symbols and pictographs
+    "\U0001fa70-\U0001faff"  # symbols and pictographs extended
+    "\U00002702-\U000027b0"  # dingbats
+    "\U0001f3fb-\U0001f3ff"  # skin tone modifiers
+    "\u200d"  # zero-width joiner
+    "\ufe0f"  # emoji variation selector
+    "]+",
+    flags=re.UNICODE,
+)
+
 EMOTIONS = [
-    'happy',
-    'sad',
-    'surprised',
-    'embarrassed',
-    'flirty',
-    'angry',
-    'thinking',
-    'confused'
+    "happy",
+    "sad",
+    "surprised",
+    "embarrassed",
+    "flirty",
+    "angry",
+    "thinking",
+    "confused",
 ]

 SYSTEM_PROMPT = """
@@ -54,6 +74,12 @@ EMOTION:happy Hey dummy! Sorry to barge in! Ya feel like hanging out?\n
 """

+def sanitize_speech(text):
+    text_without_emotion_tokens = EMOTION_TOKEN_REGEX.sub("", text)
+    return EMOJI_REGEX.sub("", text_without_emotion_tokens)
+
 def parse_emotion(line):
     def _normalize_emotion(em):
         # If not a valid emotion, then search for a match in the
@@ -67,14 +93,14 @@ def parse_emotion(line):
         return em

     try:
-        e = re.compile(r'EMOTION:\w+')
-        m = e.match(line)
+        m = EMOTION_REGEX.match(line)
         if m is not None:
-            emotion = m.group().split(':')[1]
+            emotion = m.group().split(":")[1]
             text = line[m.span()[1]:]
-            return _normalize_emotion(emotion), text
+            sanitized = sanitize_speech(text)
+            return _normalize_emotion(emotion), sanitized
         return None, line
@@ -82,34 +108,88 @@ def parse_emotion(line):
         return None, str(e)

-def sanitize_speech(text):
-    # This removes all non-ASCII characters (useful for emojis)
-    return text.encode('ascii', 'ignore').decode('ascii')
+def set_model_capabilities() -> bool:
+    """
+    LM Studio throws Bad Request if the reasoning flag is set for a model
+    that doesn't support it. This method tries to determine if the currently
+    configured model supports reasoning to signal to the fetch_llm function
+    to disable it.
+    """
+    try:
+        headers = {"Authorization": f"Bearer {persistent.api_key}"}
+        data = {
+            "model": persistent.model,
+            "input": "Start the conversation.",
+            "reasoning": "off",
+            "system_prompt": SYSTEM_PROMPT,
+        }
+        renpy.fetch(
+            f"{persistent.base_url}/api/v1/chat",
+            headers=headers,
+            json=data,
+            result="json",
+        )
+    except renpy.FetchError as fe:
+        # renpy.fetch returned a BadRequest, assume this means LM Studio
+        # rejected the request because the model doesn't support the
+        # reasoning setting in chat.
+        if hasattr(fe, "status_code") and fe.status_code == 400:
+            persistent.disable_reasoning = False
+            return True, None
+        else:
+            return False, str(fe)
+    except Exception as e:
+        # Something else happened.
+        return False, str(e)
+    else:
+        # The fetch worked, so the reasoning setting is available.
+        persistent.disable_reasoning = True
+        return True, None

 def fetch_llm(message: str) -> str:
+    """
+    Queries the chat with a model endpoint of the configured LM Studio server.
+    """
     global last_response_id
     try:
-        # Set basic request data.
+        # Set request data.
         headers = {"Authorization": f"Bearer {persistent.api_key}"}
-        data = {"model": persistent.model,
-                "input": message,
-                "system_prompt": SYSTEM_PROMPT}
+        data = {
+            "model": persistent.model,
+            "input": message,
+            "system_prompt": SYSTEM_PROMPT,
+        }
+        if persistent.disable_reasoning:
+            data["reasoning"] = "off"

         # Add the previous response ID if any to continue the conversation.
         if last_response_id is not None:
             data["previous_response_id"] = last_response_id

-        response = renpy.fetch(f"{persistent.base_url}/api/v1/chat",
-                               headers=headers,
-                               json=data,
-                               result="json")
+        # Fetch from LM Studio and parse the response.
+        response = renpy.fetch(
+            f"{persistent.base_url}/api/v1/chat",
+            headers=headers,
+            json=data,
+            result="json",
+        )
         last_response_id = response["response_id"]
         text = response["output"][0]["content"]
-        return text.split('\n')
+        return text.split("\n")
     except Exception as e:
-        return [f'Failed to fetch with error: {e}']
+        return [f"Failed to fetch with error: {e}"]
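The `set_model_capabilities` addition above follows a probe-then-flag pattern: send one request with the optional feature set, and treat a 400 response as "feature unsupported". Stripped of Ren'Py and LM Studio specifics, the idea reduces to roughly this (all names here are illustrative stand-ins, not the real API):

```python
class FetchError(Exception):
    """Stand-in for a transport error carrying an HTTP status code."""
    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code


def probe_reasoning_support(fetch) -> tuple[bool, str]:
    """Return (ok, detail): ok is False only on unexpected errors;
    detail reports whether the probed flag is supported."""
    try:
        # One throwaway request with the optional flag included.
        fetch({"reasoning": "off"})
    except FetchError as fe:
        if fe.status_code == 400:
            # Server rejected the flag: run without it from now on.
            return True, "unsupported"
        return False, str(fe)
    except Exception as e:
        return False, str(e)
    # The probe succeeded, so the flag can be sent on every request.
    return True, "supported"
```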

game/options.rpy

@@ -23,7 +23,7 @@ define gui.show_name = True
 ## The version of the game.
-define config.version = "0.2"
+define config.version = "0.3"

 ## Text that is placed on the game's about screen. Place the text between the
@@ -84,17 +84,18 @@ define config.intra_transition = dissolve
 ## A transition that is used after a game has been loaded.
-define config.after_load_transition = None
+define config.after_load_transition = dissolve

 ## Used when entering the main menu after the game has ended.
-define config.end_game_transition = None
+define config.end_game_transition = dissolve

 ## A variable to set the transition used when the game starts does not exist.
 ## Instead, use a with statement after showing the initial scene.
+define config.end_splash_transition = dissolve

 ## Window management ###########################################################
 ##
@@ -217,3 +218,4 @@ define config.minimum_presplash_time = 2.0
 default persistent.base_url = 'http://localhost:1234'
 default persistent.api_key = ''
 default persistent.model = 'gemma-3-4b-it'
+default persistent.disable_reasoning = False

game/script.rpy

@@ -1,39 +1,56 @@
 define a = Character("Anita", color = "#aaaa00", callback = speaker("a"), image = "anita")

 label start:
-    play music ["zeropage_ambiphonic303chilloutmix.mp3",
-                "zeropage_ambientdance.mp3",
-                "zeropage_ambiose.mp3" ] fadeout 0.5 fadein 0.5
+    stop music fadeout 1.0
     scene bg room
+    with Dissolve(2.0)
+
+    $ success, error = set_model_capabilities()
+
+    if not success:
+        call failure(error) from _call_failure
+        return
+
+    play music ["zeropage_ambiphonic303chilloutmix.mp3",
+                "zeropage_ambientdance.mp3",
+                "zeropage_ambiose.mp3" ] fadeout 0.5 fadein 0.5
     show anita happy with dissolve

     python:
         response = fetch_llm('Start the conversation.')[0]
-        sanitized = sanitize_speech(response)
-        emotion, line = parse_emotion(sanitized)
+        emotion, line = parse_emotion(response)

     a "[line]"

     while True:
         python:
             message = renpy.input(prompt = "What do you say to her?")
             response = fetch_llm(message)
             i = 0
         while i < len(response):
             python:
                 r = response[i].strip()
-                s = sanitize_speech(r)

-            if s != '':
-                $ emotion, line = parse_emotion(s)
+            if r != '':
+                $ emotion, line = parse_emotion(r)
                 if emotion is not None:
                     show expression f'anita {emotion}'

                 a "[line]"

             $ i += 1
     return
+
+label failure(error):
+    """Alas! Figuring out the capabilities of the configured model failed with the following error.
+
+    [error]
+
+    Unfortunately the program cannot continue, returning to the main menu."""
+    return
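The rewritten loop in `label start` drains a multi-line LLM response one line at a time, skipping blanks and showing the parsed emotion before each spoken line. Stripped of Ren'Py statements, the control flow reduces to roughly this (a sketch: `parse` and `on_say` are illustrative stand-ins for `parse_emotion` and the `show`/say statements):

```python
from typing import Callable, Optional


def drain_response(lines: list[str],
                   parse: Callable[[str], tuple[Optional[str], str]],
                   on_say: Callable[[Optional[str], str], None]) -> None:
    """Strip each line, skip blanks, parse it, and hand the result to the say callback."""
    for raw in lines:
        r = raw.strip()
        if r != "":
            emotion, line = parse(r)
            on_say(emotion, line)
```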

icon.icns (new binary file, not shown)

icon.ico (new binary file, not shown)

(Binary image added, not shown: 264 KiB.)

project.json

@@ -1,9 +1,9 @@
 {
     "build_update": false,
     "packages": [
-        "win",
         "linux",
-        "mac"
+        "mac",
+        "win"
     ],
     "add_from": true,
     "force_recompile": true,