  Dec 18, 2015
    • Change pages domain to host · 23272ee1
      Kamil Trzciński authored
    • ad93eafb
      Kamil Trzciński authored
    • Add GitLab Pages · 650d6a63
      Kamil Trzciński authored
      - Pages are created when build artifacts for the `pages` job are uploaded
      - Pages serve the content under: http://group.pages.domain.com/project
      - Pages can serve a group page via a special project named after the host: group.pages.domain.com
      - Users can provide their own 403 and 404 error pages by creating 403.html and 404.html in the group page project
      - Pages can be explicitly removed from the project by clicking Remove Pages in Project Settings
      - The size of pages is limited by the Application Setting "max pages size", which limits the maximum size of the unpacked archive (default: 100MB)
      - The public/ directory is extracted from the artifacts and its content is served as static pages
      - The asynchronous Pages worker uses `dd` to limit the unpacked tar size (sketched below)
      - Pages need to be explicitly enabled and the domain needs to be specified in gitlab.yml
      - Pages are part of backups
      - Pages notify the deployment status using the Commit Status API
      - Pages use a new Sidekiq queue: pages
      - Pages use a separate nginx config which needs to be explicitly added
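      A minimal sketch of how the `dd`-based size limit could look; the
      class, method, and constant names below are illustrative rather
      than the actual worker implementation:

          # Illustrative only: names and the 32 KB block size are assumptions.
          class PagesWorker
            BLOCK_SIZE = 32 * 1024
            MAX_SIZE   = 100 * 1024 * 1024 # "max pages size" application setting

            def extract_archive(archive_path, public_dir)
              count = MAX_SIZE / BLOCK_SIZE

              # `dd` caps how many bytes reach `tar`, so even a maliciously
              # crafted archive cannot unpack more than the configured limit.
              # Paths should be shell-escaped in real code.
              system("dd if=#{archive_path} bs=#{BLOCK_SIZE} count=#{count} | " \
                     "tar -x -f - -C #{public_dir} public/")
            end
          end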
  Dec 17, 2015
    • Only track method calls above a certain threshold · a41287d8
      Yorick Peterse authored
      This ensures we don't end up wasting resources by tracking method calls
      that only take a few microseconds. By default the threshold is 10
      milliseconds but this can be changed using the gitlab.yml configuration
      file.
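      A hypothetical sketch of the threshold gate; the method and
      argument names are assumptions, not the exact implementation:

          # Skip calls that take only a few microseconds so they don't waste
          # resources in the metrics pipeline; the 10 ms default can be
          # overridden via gitlab.yml.
          def track_call?(started_at, finished_at, threshold_ms = 10)
            duration_ms = (finished_at - started_at) * 1000.0
            duration_ms >= threshold_ms
          end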
    • Storing of application metrics in InfluxDB · 141e946c
      Yorick Peterse authored
      This adds the ability to write application metrics (e.g. SQL timings) to
      InfluxDB. These metrics can in turn be visualized using Grafana, or
      really anything else that can read from InfluxDB. These metrics can be
      used to track application performance over time, between different Ruby
      versions, different GitLab versions, etc.
      
      == Transaction Metrics
      
      Currently the following is tracked on a per transaction basis (a
      transaction is a Rails request or a single Sidekiq job):
      
      * Timings per query along with the raw (obfuscated) SQL and information
        about what file the query originated from.
      * Timings per view along with the path of the view and information about
        what file triggered the rendering process.
      * The duration of a request itself along with the controller/worker
        class and method name.
      * The duration of any instrumented method calls (more below).
      
      == Sampled Metrics
      
      Certain metrics can't be directly associated with a transaction. For
      example, a process' total memory usage is unrelated to any running
      transactions. While a transaction can result in memory usage going
      up, there's no accurate way to determine which transaction is to
      blame; this becomes especially problematic in multi-threaded
      environments.
      
      To solve this problem there's a separate thread that takes samples at a
      fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
      currently tracks the following:
      
      * The process' total memory usage.
      * The number of file descriptors opened by the process.
      * The number of Ruby objects (using ObjectSpace.count_objects).
      * GC statistics such as timings, heap slots, etc.
      
      The default/current interval is 15 seconds; any smaller interval might
      put too much pressure on InfluxDB (especially when running dozens of
      processes).
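
      A simplified sketch of such a sampler; only the class name
      (Gitlab::Metrics::Sampler) comes from this commit, the body is an
      illustration and /proc/self/fd is Linux-specific:

          class Sampler
            def initialize(interval = 15)
              @interval = interval
            end

            def start
              Thread.new do
                loop do
                  sleep(@interval)
                  sample
                end
              end
            end

            def sample
              # Process-wide values that can't be tied to a single transaction.
              {
                memory_usage:     `ps -o rss= -p #{Process.pid}`.to_i, # in KB
                file_descriptors: Dir.glob('/proc/self/fd/*').length,
                object_counts:    ObjectSpace.count_objects,
                gc_statistics:    GC.stat
              }
            end
          end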
      
      == Method Instrumentation
      
      While currently not yet used, methods can be instrumented to track
      how long they take to run. Unlike the likes of New Relic, this
      doesn't require modifying the source code (e.g. including modules);
      it all happens from the outside. For example, to track
      `User.by_login` we'd add the following code somewhere in an
      initializer:
      
          Gitlab::Metrics::Instrumentation.
            instrument_method(User, :by_login)
      
      To instrument an instance method instead:
      
          Gitlab::Metrics::Instrumentation.
            instrument_instance_method(User, :save)
      
      Instrumentation for either all public model methods or a few crucial
      ones will be added in the near future; I simply haven't gotten to doing
      so just yet.
      
      == Configuration
      
      By default metrics are disabled. This means users don't have to bother
      setting anything up if they don't want to. Metrics can be enabled by
      editing one's gitlab.yml configuration file (see
      config/gitlab.yml.example for example settings).
      
      == Writing Data To InfluxDB
      
      Because InfluxDB is still a fairly young product I expect the
      worst: data loss, unexpected reboots, the database not responding,
      you name it. Because of this, data is _not_ written to InfluxDB
      directly; instead it's queued and processed by Sidekiq. This
      ensures that users won't notice anything when InfluxDB is giving
      trouble.
      
      The metrics worker can be started in a standalone manner as follows:
      
          bundle exec sidekiq -q metrics
      
      The corresponding class is called MetricsWorker.
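
      A hedged sketch of such a worker; the internals of the real
      MetricsWorker may differ, the database name is an assumption, and
      the influxdb gem API should be checked against the installed
      version:

          require 'sidekiq'
          require 'influxdb'

          class MetricsWorker
            include Sidekiq::Worker

            # Runs on the dedicated queue started with `sidekiq -q metrics`.
            sidekiq_options queue: :metrics

            def perform(points)
              influxdb = InfluxDB::Client.new('gitlab_metrics')

              # Each queued point already carries its series name, values and
              # tags, gathered during a transaction or sampler run.
              influxdb.write_points(points)
            end
          end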
  Dec 02, 2015
    • Improve reliability of LdapSyncWorker · 35912d59
      Jacob Vosmaer authored
      First of all, Sidekiq job retries are not needed because this is a
      recurring job. Second of all, we add the option to run once every
      X days instead of once every day. This helps when the job takes
      close to or more than 24 hours to complete.
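      An illustrative sketch of both changes; the interval handling and
      the names used below are assumptions, not the actual implementation:

          require 'sidekiq'
          require 'time'

          class LdapSyncWorker
            include Sidekiq::Worker

            # Retries are pointless for a recurring job: the next scheduled
            # run simply picks up where a failed one left off.
            sidekiq_options retry: false

            SYNC_INTERVAL_DAYS = 1 # the "run once every X days" option

            def perform(last_synced_at = nil)
              # Skip this run if the previous sync finished recently enough.
              if last_synced_at &&
                  Time.now - Time.parse(last_synced_at) < SYNC_INTERVAL_DAYS * 86_400
                return
              end

              # ... perform the actual LDAP group sync here ...
            end
          end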