Problem to solve
It is inconvenient to insert badges manually, yet the badges are usually the same across repositories.
So it is proposed to:

1. Create DB tables storing the presets:

```sql
create table `badges_presets_inheritance` (
  `repo_id` INT,           -- foreign key
  `inherited_repo_id` INT  -- foreign key
);
create table `badges_presets_services` (
  `repo_id` INT,           -- foreign key
  `service_id` INT PRIMARY KEY,
  `service_name` TEXT
);
create table `badges_presets` (
  `badge_id` INT PRIMARY KEY,
  `badge_name` TEXT,
  `service_id` INT,        -- foreign key
  `image_uri` TEXT,
  `link_uri` TEXT
);
```

2. Create an account setting selecting a repo with presets.
3. Create an org setting selecting a repo with presets, used as the default within the org.
4. Create a site setting selecting a repo with presets, used as the default for new accounts.
5. A repo suitable for selection is one containing 3 files: a license, a readme and a YAML file with presets like this:
```yaml
import:
  - site_default  # replaced by the default repo id in site settings
  - org_default   # replaced by the default repo id in the containing org settings
services:
  coveralls:
    coverage:
      image: "...."
      link: "...."
```
6. Create the machinery validating and importing repo content. If validation or import have failed, the data is removed from the tables.
7. Create the machinery triggering validation and import of repo content on update of its main branch.
8. When a user selects a repo, the platform checks whether the contents of the repo are already imported into the tables; if not, it imports them.
9. In badge settings a user can add a badge by typing a service name and clicking on the badge they want, without remembering any URIs. Instead of URIs, the badge id is stored.
10. When a badge is deleted from a repo, its record is deleted in all the repos using it. The record is not replaced by raw URIs, because that would open the system to trolling by vandalizing badges of foreign repos.
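Step 6 could be sketched as follows. This is a minimal illustration with hypothetical names (`import_presets`, `MAX_FILE_BYTES`, the row tuples), not the platform's actual implementation: it assumes the YAML has already been parsed into a dict, validates it, and either produces rows for `badges_presets_services` and `badges_presets` or raises, so that nothing partial reaches the tables.

```python
# Hypothetical sketch of step 6: validate a parsed presets file and build
# rows for `badges_presets_services` and `badges_presets`. Any validation
# error raises before rows are produced, mirroring "if validation or import
# have failed, the data is to be removed from the tables".

MAX_FILE_BYTES = 64 * 1024  # assumed guard against extremely large files

def import_presets(repo_id, presets, raw_size):
    """Return (service_rows, badge_rows) or raise ValueError."""
    if raw_size > MAX_FILE_BYTES:
        raise ValueError("presets file too large")
    service_rows, badge_rows = [], []
    next_service_id, next_badge_id = 1, 1
    for service_name, badges in presets.get("services", {}).items():
        # row for badges_presets_services: (repo_id, service_id, service_name)
        service_rows.append((repo_id, next_service_id, service_name))
        for badge_name, uris in badges.items():
            if not {"image", "link"} <= uris.keys():
                raise ValueError(f"badge {badge_name!r} is missing image/link")
            # row for badges_presets:
            # (badge_id, badge_name, service_id, image_uri, link_uri)
            badge_rows.append((next_badge_id, badge_name, next_service_id,
                               uris["image"], uris["link"]))
            next_badge_id += 1
        next_service_id += 1
    return service_rows, badge_rows

# Example: the YAML example above, already parsed into a dict.
parsed = {"services": {"coveralls": {"coverage":
          {"image": "https://img.example/c.svg", "link": "https://example/c"}}}}
services, badges = import_presets(42, parsed, raw_size=120)
```

Raising on the first invalid badge (rather than importing the valid part) keeps the all-or-nothing behaviour the proposal asks for.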
There may be some problems:
- Someone creates repos linked by imports into cycles.
- Someone creates long chains of repos. Solution: allow inheritance only from repos either of a higher level (site and org prefs) or of the same user (and check for cycles), and limit the count of such repos per user to some small amount, so that checking cycles stays feasible.
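The cycle-and-chain check can be sketched as a bounded depth-first walk over the `badges_presets_inheritance` edges. Everything here is an assumption for illustration: the function name, the in-memory `imports` mapping (standing in for the table), and the concrete limit.

```python
# Hypothetical sketch: reject inheritance graphs that contain a cycle or
# reach more repos than a small per-user limit, so the walk is always cheap.

def violates_limits(imports, start, limit=8):
    """imports maps repo_id -> list of inherited repo_ids."""
    visited = set()

    def dfs(repo, path):
        if repo in path:
            return True            # this repo imports itself: cycle
        if repo in visited:
            return False           # already explored via another branch
        visited.add(repo)
        if len(visited) > limit:
            return True            # chain/graph exceeds the allowed size
        return any(dfs(nxt, path | {repo}) for nxt in imports.get(repo, []))

    return dfs(start, frozenset())
```

Tracking the current `path` separately from `visited` lets diamond-shaped imports (two repos inheriting the same base) pass while genuine cycles are still caught.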
- Someone uses extremely large files. Solution: do not process large files.
- Someone updates repos constantly to create load on the service. Solution: limit the rate of DB updates.
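Rate-limiting the imports could look like the following per-repo minimum-interval guard. The names and the 60-second interval are illustrative assumptions, not part of the proposal.

```python
# Hypothetical sketch: enforce a minimum interval between imports of the
# same repo, so constant pushes to its main branch cannot hammer the DB.
import time

MIN_SECONDS_BETWEEN_IMPORTS = 60.0   # assumed interval
_last_import = {}                    # repo_id -> timestamp of last accepted import

def may_reimport(repo_id, now=None):
    now = time.monotonic() if now is None else now
    last = _last_import.get(repo_id)
    if last is not None and now - last < MIN_SECONDS_BETWEEN_IMPORTS:
        return False                 # too soon: skip this update, keep old data
    _last_import[repo_id] = now
    return True
```

A skipped update is simply dropped; the next push after the interval elapses triggers a fresh import, so the tables eventually converge to the repo's content.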
- What to do if dependencies are broken? Just ignore non-existing repos.
- Load on the DB. The proposed schema is normalized, but I do not know how well it would perform.
What does success look like, and how can we measure that?
People use it and contribute to the main repo.