An automated posting manager and scheduler, originally built as an extension for KlipMachine.
The project solves a specific problem: automating video uploads to platforms like TikTok Studio without relying on official APIs (which often limit reach or features) and without triggering bot detection systems.
It consists of a user-friendly Streamlit dashboard to manage the content queue, and a robust background Python worker that drives a stealth browser to perform the actual uploads using human-like interactions.
The interface allows for single-post scheduling or bulk processing via ZIP archives.
- Single Post Mode: Upload an MP4 video. You can manually write the description or use the AI Tagger.
- AI Tagger: Integrates with the Groq API (Llama-3) to auto-generate viral descriptions and hashtags based on the video filename.
- Smart Scheduling: An "Auto" button finds the next available optimal posting slot (12:00, 18:00, or 22:00), respecting a hard cap of 4 posts per day to avoid spam penalties.
- Bulk Mode (Auto-Pilot): Drop a ZIP file containing multiple videos. The system automatically extracts the files, generates AI metadata for each, and maps them across the week using the smart scheduling algorithm.
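As a rough illustration of the smart-scheduling logic described above, here is a minimal sketch. The slot hours and the daily cap come from the description; the function name and signature are illustrative, not the project's actual API:

```python
from datetime import datetime, timedelta, time as dtime

OPTIMAL_SLOTS = [12, 18, 22]   # posting hours from the docs above
MAX_POSTS_PER_DAY = 4          # hard cap to avoid spam penalties

def next_free_slot(scheduled: list[datetime], now: datetime) -> datetime:
    """Return the next optimal slot that is in the future, not already
    taken, and on a day with fewer than MAX_POSTS_PER_DAY posts queued."""
    day = now.date()
    while True:
        posts_that_day = sum(1 for s in scheduled if s.date() == day)
        if posts_that_day < MAX_POSTS_PER_DAY:
            for hour in OPTIMAL_SLOTS:
                candidate = datetime.combine(day, dtime(hour))
                if candidate > now and candidate not in scheduled:
                    return candidate
        day += timedelta(days=1)  # day is full: try the next one
```

In bulk mode, calling this repeatedly while appending each returned slot to `scheduled` spreads a ZIP of videos across the week automatically.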
The dashboard also provides a comprehensive view of the SQLite queue. Posts can be rescheduled or cancelled on the fly.

- Calendar View: Visual overview of the month using FullCalendar.
- List View: Detailed status of each post (Scheduled, Posted, Error).
Standard automation scripts are easily detected by modern web platforms. The worker process bypasses this using several custom implementations:
- BioMouse: Instead of moving the cursor in straight lines, the script uses a pre-recorded JSON library of actual human mouse movements (human_library.json). When the bot needs to click a button, it selects a gesture from the library, calculates the required distance and angle, and applies a geometric transformation to map the human trajectory onto the target coordinates.
- Typing Simulation: Simulates variable keystroke delays, pauses between words, and even occasional typos followed by immediate backspace corrections.
- Stealth Context: Uses a persistent Chrome profile (tiktok_profile/) to retain cookies and session data, minimizing login prompts and captcha triggers.
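The geometric mapping behind BioMouse can be sketched as a scale-and-rotate transform, following the distance-and-angle description above. This is an illustrative sketch, not the module's real code:

```python
import math

def map_gesture(gesture: list[tuple[float, float]],
                start: tuple[float, float],
                target: tuple[float, float]) -> list[tuple[float, float]]:
    """Map a recorded human gesture onto the segment from `start` to
    `target` by scaling and rotating it, so the human-like curvature
    and jitter of the original movement are preserved."""
    gx0, gy0 = gesture[0]
    gx1, gy1 = gesture[-1]
    # Length and angle of the recorded gesture
    g_len = math.hypot(gx1 - gx0, gy1 - gy0)
    g_ang = math.atan2(gy1 - gy0, gx1 - gx0)
    # Length and angle of the desired movement
    dx, dy = target[0] - start[0], target[1] - start[1]
    scale = math.hypot(dx, dy) / g_len
    rot = math.atan2(dy, dx) - g_ang
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    path = []
    for x, y in gesture:
        # Translate to origin, scale, rotate, then translate to start
        px, py = (x - gx0) * scale, (y - gy0) * scale
        path.append((start[0] + px * cos_r - py * sin_r,
                     start[1] + px * sin_r + py * cos_r))
    return path
```

Because every point of the recorded trajectory is transformed, the resulting path ends exactly on the target while keeping the micro-variations of the human recording.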
The worker process spends most of its time sleeping between scheduled posts. To avoid polling the database constantly, it uses OS-level signals.
When a new post is scheduled via the dashboard, Streamlit finds the worker's PID and sends a SIGUSR1 signal. The worker catches this signal, interrupts its current sleep cycle, recalculates the next due date, and goes back to sleep. This ensures the schedule is always up to date without wasting CPU cycles or requiring manual restarts.
Worker calculating initial sleep time:

Worker reacting to a schedule change via SIGUSR1:

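A hedged sketch of the signal-driven wake-up. Note that under PEP 475, Python's `time.sleep` is automatically restarted after a signal handler returns, so a robust worker sleeps in short steps and checks a flag set by the handler; names here are illustrative:

```python
import signal
import time

wake_event = False  # set by the handler when the schedule changes

def on_schedule_change(signum, frame):
    """SIGUSR1 handler: flag that the schedule changed so the sleep
    loop below can break out and recalculate the next due date."""
    global wake_event
    wake_event = True

signal.signal(signal.SIGUSR1, on_schedule_change)

def interruptible_sleep(seconds: float, step: float = 1.0) -> None:
    """Sleep in short steps, checking the wake flag between naps.
    Returns early if SIGUSR1 arrived; clears the flag on exit."""
    global wake_event
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline and not wake_event:
        time.sleep(max(0.0, min(step, deadline - time.monotonic())))
    wake_event = False
```

On the dashboard side, triggering the wake-up is a single `os.kill(worker_pid, signal.SIGUSR1)` once the worker's PID is known.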
- Clone the repository and navigate to the folder.
- Create and activate a virtual environment.

  ```shell
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install the required Python packages.

  ```shell
  pip install -r requirements.txt
  ```

- Install Playwright browser binaries.

  ```shell
  playwright install
  ```

- Create a .env file in the root directory and add your Groq API key for the AI tagger.

  ```
  GROQ_API_KEY=your_api_key_here
  ```
Before running any component, initialize the SQLite database structure. This creates the required folders and tables.

```shell
python3 engine/db_manager.py
```

You need to log in to your account manually to save the session cookies. Run the setup script:

```shell
python3 engine/setup_login.py
```

A browser will open. Log in, browse for a few seconds, and close the window. The session is now saved in tiktok_profile/.
The bot relies on a library of actual human mouse movements to evade detection. You need to record your own gestures before running the worker.
First, run the recorder:

```shell
python3 engine/recorder.py
```

Move your mouse naturally across the screen. Press the SPACE bar to mark the end of a gesture. Repeat this a few dozen times to create a diverse dataset. Press ESC to save the raw data.

Then, process the raw data into a normalized library:

```shell
python3 engine/process_data.py
```

This generates the data/human_library.json file required by the BioMouse module.
You need to run the dashboard and the worker in two separate terminals. Ensure your virtual environment is activated in both.
Terminal 1 (Dashboard):

```shell
streamlit run dashboard/app.py
```

Terminal 2 (Worker):

```shell
python3 engine/worker.py
```

The system is now fully operational. Drop your videos in the dashboard and let the worker handle the rest.