SonarQube

SonarQube: Overview

SonarQube is an automatic code review tool that detects bugs, vulnerabilities, and code smells in your code. It can integrate with your existing workflow to enable continuous code inspection across project branches and pull requests.

SonarQube: Concepts

Architecture

Analyzer: A client application that analyzes the source code to compute snapshots.

Database: Stores configuration and snapshots.

Server: Web interface that is used to browse snapshot data and make configuration changes.

Quality

Issue types (bug, vulnerability, and code smell) are deprecated. Issues are now tied to Clean Code attributes and software qualities impacted. See Clean Code for more details.

Clean Code: Code whose attributes make your software reliable, secure, and maintainable. See Clean Code for more details.

Bug: An issue that represents something wrong in the code. If this has not broken yet, it will, and will probably break at the worst possible moment. This needs to be fixed as soon as possible.

Code smell: A maintainability-related issue in the code. Leaving it as-is means that at best, developers maintaining the code will have a harder time than they should when making changes. At worst, they'll be so confused by the state of the code that they'll introduce additional errors as they make changes.

Cost: See Remediation cost.

Debt: See Technical debt.

Issue: When a piece of code does not comply with a rule, an issue is logged on the snapshot. An issue can be logged on a source file or a unit test file.

Measure: The value of a metric for a given file or project at a given time. For example, 125 lines of code on class MyClass, or a duplicated lines density of 30.5% on project myProject, are measures.

Metric: A type of measurement. Metrics can have varying values, or measures, over time. Examples: number of lines of code, complexity, etc. A metric may be either qualitative (for example, the density of duplicated lines, line coverage by tests, etc.) or quantitative (for example, the number of lines of code, the complexity, etc.).

New code definition: A changeset or period that you're keeping a close watch on for the introduction of new problems in the code. Ideally, this is since the previous_version, but if you don't use a Maven-like versioning scheme, you may need to set a time period such as 21 days since a specific analysis, or use a reference branch. See Defining new code for more details.

Quality profile: A set of rules. Each snapshot is based on a single quality profile. See also Quality profiles.

Rule: A coding standard or practice that should be followed. Not complying with coding rules can lead to issues and hotspots. Adherence to rules can be used to measure the quality of code files or unit tests.

Remediation cost: The estimated time required to fix vulnerability and reliability issues.

Snapshot: A set of measures and issues on a given project at a given time. A snapshot is generated for each analysis.

Security hotspot: A security-sensitive piece of code that needs to be manually reviewed. Upon review, you'll either find that there is no threat or that there is vulnerable code that needs to be fixed.

Technical debt: The estimated time required to fix all maintainability issues and code smells.

Vulnerability: A security-related issue that represents a backdoor for attackers. See also Security-related rules.

SonarQube: Metric definitions

Complexity

Complexity (complexity): Complexity refers to Cyclomatic complexity, a quantitative metric used to calculate the number of paths through the code. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1. This calculation varies slightly by language because keywords and functionality differ.
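To illustrate how the counter accumulates, here is a small hypothetical Java method annotated with the points where a typical cyclomatic-complexity counter would increment; the exact counting rules differ slightly between language analyzers, so treat the total as indicative only.

class ComplexityExample {
    int classify(int value, boolean strict) {   // +1 (function entry)
        if (value < 0) {                        // +1 (if)
            return -1;
        }
        for (int i = 0; i < value; i++) {       // +1 (for)
            if (strict && i % 2 == 0) {         // +1 (if), +1 (&&)
                value--;
            }
        }
        return value > 10 ? 1 : 0;              // +1 (conditional expression)
    }
}
// Total: a cyclomatic complexity of 6 under these illustrative counting rules.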


Cognitive Complexity (cognitive_complexity): How hard it is to understand the code's control flow. See the Cognitive Complexity white paper for a complete description of the mathematical model applied to compute this measure.
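For contrast, the hypothetical method below is annotated with the increments Cognitive Complexity would typically assign; unlike cyclomatic complexity, it adds an extra penalty for nesting. The figures follow the published model in broad strokes and may not match the analyzer exactly.

class CognitiveComplexityExample {
    int countMatches(int[] values, int threshold) {
        int matches = 0;
        for (int v : values) {            // +1 (loop)
            if (v > threshold) {          // +1 (if), +1 (nested inside the loop)
                matches++;
            } else if (v == threshold) {  // +1 (else if)
                matches += 2;
            }
        }
        return matches;
    }
}
// Roughly 4 under the published model; nesting the same branches more deeply would
// keep raising this score, while the cyclomatic count would not change.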

Duplications

Duplicated blocks (duplicated_blocks): The number of duplicated blocks of lines.

Language-specific details

For a block of code to be considered as duplicated:

Non-Java projects:

  • There should be at least 100 successive and duplicated tokens.

  • Those tokens should be spread over at least:

    • 30 lines of code for COBOL

    • 20 lines of code for ABAP

    • 10 lines of code for other languages

Java projects: There should be at least 10 successive and duplicated statements whatever the number of tokens and lines. Differences in indentation and in string literals are ignored while detecting duplications.

Duplicated files (duplicated_files): The number of files involved in duplications.

Duplicated lines (duplicated_lines): The number of lines involved in duplications.

Duplicated lines (%) (duplicated_lines_density): duplicated_lines / (lines of code) * 100

Issues

The old severity feature is deprecated. Issue severity is now tied to the impact on the software qualities and cannot be changed. See Clean Code for more details.

New issues (new_violations): The number of issues raised for the first time on new code.

New xxx issues (new_xxx_violations): The number of issues of the specified severity raised for the first time on new code, where xxx is one of: blocker, critical, major, minor, info.

Issues (violations): The total count of issues in all states.

xxx issues (xxx_violations): The total count of issues of the specified severity, where xxx is one of: blocker, critical, major, minor, info.

False positive issues (false_positive_issues): The total count of issues marked false positive.

Open issues (open_issues): The total count of issues in the Open state.

Confirmed issues (confirmed_issues): The total count of issues in the Confirmed state.

Reopened issues (reopened_issues): The total count of issues in the Reopened state.

Maintainability

Issue types (bug, vulnerability, and code smell) are deprecated. Issues are now tied to Clean Code attributes and software qualities impacted. See Clean Code for more details.

Code smells (code_smells): The total count of code smell issues.

New code smells (new_code_smells): The total count of Code Smell issues raised for the first time on New Code.

Maintainability rating (sqale_rating): (Formerly the SQALE rating.) The rating given to your project related to the value of your Technical debt ratio. The default Maintainability rating grid is:

A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1

The Maintainability rating scale can be alternately stated by saying that if the outstanding remediation cost is:

  • <= 5% of the time that has already gone into the application, the rating is A

  • between 6% and 10%, the rating is a B

  • between 11% and 20%, the rating is a C

  • between 21% and 50%, the rating is a D

  • anything over 50% is an E

Technical debt (sqale_index): A measure of effort to fix all code smells. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.

Technical debt on new code (new_technical_debt): A measure of the effort required to fix all code smells raised for the first time on new code.

Technical debt ratio (sqale_debt_ratio): The ratio between the cost to develop the software and the cost to fix it. The Technical debt ratio formula is:

Remediation cost / Development cost

which can be restated as:

Remediation cost / (Cost to develop 1 line of code * Number of lines of code)

The value of the cost to develop a line of code is 0.06 days.

Technical debt ratio on new code (new_sqale_debt_ratio): The ratio between the cost to develop the code changed on new code and the cost of the issues linked to it.
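To make the relationship between the debt ratio and the rating grid concrete, here is a small illustrative sketch; the 0.06 days-per-line development cost and the A-E grid come from the definitions above, while the project figures are purely hypothetical.

class MaintainabilityRatingExample {

    // Default grid from the definitions above: A <= 0.05, B <= 0.10,
    // C <= 0.20, D <= 0.50, E above that.
    static char rating(double debtRatio) {
        if (debtRatio <= 0.05) return 'A';
        if (debtRatio <= 0.10) return 'B';
        if (debtRatio <= 0.20) return 'C';
        if (debtRatio <= 0.50) return 'D';
        return 'E';
    }

    public static void main(String[] args) {
        double remediationCostDays = 60;       // hypothetical technical debt
        int linesOfCode = 20_000;              // hypothetical project size
        double costPerLineDays = 0.06;         // default cost to develop one line of code
        double developmentCostDays = costPerLineDays * linesOfCode;   // 1,200 days
        double debtRatio = remediationCostDays / developmentCostDays; // 0.05
        System.out.printf("debt ratio = %.2f, rating = %s%n", debtRatio, rating(debtRatio)); // A
    }
}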

Quality gates

Quality gate status (alert_status): The state of the quality gate associated with your project. Possible values are ERROR and OK. Note: the WARN value has been removed since SonarQube 7.6.
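If you want to read this status outside the UI, the sketch below is one way to do it; it assumes a hypothetical server at https://sonarqube.example.com, a hypothetical project key my_project, and a user token passed as the Basic-auth username with an empty password. The api/qualitygates/project_status endpoint is part of the SonarQube web API, but verify the exact parameters against your server version.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class QualityGateStatusCheck {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("SONAR_TOKEN");       // user token, see My Account > Security
        String auth = Base64.getEncoder()
                .encodeToString((token + ":").getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sonarqube.example.com/api/qualitygates/project_status"
                        + "?projectKey=my_project"))        // hypothetical host and project key
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains projectStatus.status, e.g. "OK" or "ERROR".
        System.out.println(response.body());
    }
}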

Quality gate details (quality_gate_details): Shows, for all the conditions of your quality gate, which conditions are failing and which are passing.

Reliability

Issue types (bug, vulnerability, and code smell) are deprecated. Issues are now tied to Clean Code attributes and software qualities impacted. See Clean Code for more details.

Bugs (bugs): The total number of bug issues.

New Bugs (new_bugs): The number of new bug issues.

Reliability rating (reliability_rating):
A = 0 Bugs
B = at least 1 Minor Bug
C = at least 1 Major Bug
D = at least 1 Critical Bug
E = at least 1 Blocker Bug

Reliability remediation effort (reliability_remediation_effort): The effort to fix all bug issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.

Reliability remediation effort on new code (new_reliability_remediation_effort): The same as Reliability remediation effort but on the code changed on new code.

Security

Issue types (bug, vulnerability, and code smell) are deprecated. Issues are now tied to Clean Code attributes and software qualities impacted. See Clean Code for more details.

Vulnerabilities (vulnerabilities): The number of vulnerability issues.

Vulnerabilities on new code (new_vulnerabilities): The number of new vulnerability issues.

Security rating (security_rating):
A = 0 Vulnerabilities
B = at least 1 Minor Vulnerability
C = at least 1 Major Vulnerability
D = at least 1 Critical Vulnerability
E = at least 1 Blocker Vulnerability

Security remediation effort (security_remediation_effort): The effort to fix all vulnerability issues. The measure is stored in minutes in the DB. An 8-hour day is assumed when values are shown in days.

Security remediation effort on new code (new_security_remediation_effort): The same as Security remediation effort but on the code changed on New Code.

Security hotspots (security_hotspots): The number of Security Hotspots.

Security hotspots on new code (new_security_hotspots): The number of new Security Hotspots on New Code.

Security review rating (security_review_rating): The security review rating is a letter grade based on the percentage of Reviewed Security Hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed or Safe.

A = >= 80%
B = >= 70% and < 80%
C = >= 50% and < 70%
D = >= 30% and < 50%
E = < 30%

Security review rating on new code (new_security_review_rating): The security review rating for new code.

Security hotspots reviewed (security_hotspots_reviewed): The percentage of reviewed security hotspots. Ratio formula: Number of Reviewed Hotspots x 100 / (To_Review Hotspots + Reviewed Hotspots)
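Putting the ratio formula and the grading thresholds together, here is a small illustrative helper; the grid comes from the rating definition above and the hotspot counts are hypothetical.

class SecurityReviewRatingExample {

    // reviewedPercent = Reviewed Hotspots x 100 / (To_Review Hotspots + Reviewed Hotspots)
    static char reviewRating(int reviewed, int toReview) {
        double reviewedPercent = reviewed * 100.0 / (toReview + reviewed);
        if (reviewedPercent >= 80) return 'A';
        if (reviewedPercent >= 70) return 'B';
        if (reviewedPercent >= 50) return 'C';
        if (reviewedPercent >= 30) return 'D';
        return 'E';
    }

    public static void main(String[] args) {
        // Hypothetical: 15 reviewed hotspots, 5 still to review -> 75% reviewed -> B
        System.out.println(reviewRating(15, 5));
    }
}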

New security hotspots reviewed: The percentage of reviewed security hotspots on new code.

Size

Classes (classes): The number of classes (including nested classes, interfaces, enums, and annotations).

Comment lines (comment_lines): The number of lines containing either comment or commented-out code.

Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.

The following piece of code contains 9 comment lines:

/**                                            +0 => empty comment line
 *                                             +0 => empty comment line
 * This is my documentation                    +1 => significant comment
 * although I don't                            +1 => significant comment
 * have much                                   +1 => significant comment
 * to say                                      +1 => significant comment
 *                                             +0 => empty comment line
 ***************************                   +0 => non-significant comment
 *                                             +0 => empty comment line
 * blabla...                                   +1 => significant comment
 */                                            +0 => empty comment line

/**                                            +0 => empty comment line
 * public String foo() {                       +1 => commented-out code
 *   System.out.println(message);              +1 => commented-out code
 *   return message;                           +1 => commented-out code
 * }                                           +1 => commented-out code
 */                                            +0 => empty comment line

Comments (%) (comment_lines_density): The comment lines density = comment lines / (lines of code + comment lines) * 100

With such a formula:

  • 50% means that the number of lines of code equals the number of comment lines

  • 100% means that the file only contains comment lines

Directories (directories): The number of directories.

Files (files): The number of files.

Lines (lines): The number of physical lines (number of carriage returns).

Lines of code (ncloc): The number of physical lines that contain at least one character which is neither a whitespace nor a tabulation nor part of a comment.

Lines of code per language (ncloc_language_distribution): The non-commented lines of code distributed by language.

Functions (functions): The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph.


Projects (projects): The number of projects in a Portfolio.

Statements (statements): The number of statements.

Tests

Condition coverage (branch_coverage): On each line of code containing some boolean expressions, the condition coverage answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. This is the density of possible conditions in flow control structures that have been followed during unit test execution.

Condition coverage = (CT + CF) / (2*B) where:

  • CT = conditions that have been evaluated to 'true' at least once

  • CF = conditions that have been evaluated to 'false' at least once

  • B = total number of conditions
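As a hypothetical worked example, suppose the method below is exercised only by unit tests that always pass a positive limit and a value below it:

class ConditionCoverageExample {
    // A single line containing two boolean conditions: (limit > 0) and (value < limit).
    static boolean accepts(int value, int limit) {
        return limit > 0 && value < limit;
    }
}
// If the tests only call accepts(1, 10) and accepts(2, 10):
//   limit > 0      is evaluated to true (CT) but never to false
//   value < limit  is evaluated to true (CT) but never to false
// CT = 2, CF = 0, B = 2, so condition coverage = (2 + 0) / (2 * 2) = 50%.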

Condition coverage on new code (new_branch_coverage): This definition is identical to Condition coverage but is restricted to new/updated source code.

Condition coverage hits (branch_coverage_hits_data): A list of covered conditions.

Conditions by line (conditions_by_line): The number of conditions by line.

Covered conditions by line (covered_conditions_by_line): The number of covered conditions by line.

Coverage (coverage): A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question 'How much of the source code has been covered by the unit tests?'.

Coverage = (CT + CF + LC)/(2*B + EL) where:

  • CT = conditions that have been evaluated to 'true' at least once

  • CF = conditions that have been evaluated to 'false' at least once

  • LC = covered lines (lines_to_cover - uncovered_lines)

  • B = total number of conditions

  • EL = total number of executable lines (lines_to_cover)
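A hypothetical worked example of the combined formula, using made-up measures for a single file:

class CoverageExample {
    public static void main(String[] args) {
        int linesToCover = 100;      // EL
        int uncoveredLines = 20;     // so LC = 100 - 20 = 80 covered lines
        int conditions = 10;         // B
        int ct = 7;                  // conditions evaluated to true at least once
        int cf = 5;                  // conditions evaluated to false at least once

        int lc = linesToCover - uncoveredLines;
        double coverage = 100.0 * (ct + cf + lc) / (2 * conditions + linesToCover);
        System.out.printf("coverage = %.1f%%%n", coverage);  // (7 + 5 + 80) / (20 + 100) = 76.7%
    }
}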

Coverage on new code (new_coverage): This definition is identical to Coverage but is restricted to new/updated source code.

Line coverage (line_coverage): On a given line of code, Line coverage simply answers the question 'Has this line of code been executed during the execution of the unit tests?'. It is the density of covered lines by unit tests:

Line coverage = LC / EL where:

  • LC = covered lines (lines_to_cover - uncovered_lines)

  • EL = total number of executable lines (lines_to_cover)

Line coverage on new code (new_line_coverage): This definition is identical to Line coverage but restricted to new/updated source code.

Line coverage hits (coverage_line_hits_data): A list of covered lines.

Lines to cover (lines_to_cover): The number of lines of code that could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover).

Lines to cover on new code (new_lines_to_cover): This definition is identical to Lines to cover but restricted to new/updated source code.

Skipped unit tests (skipped_tests): The number of skipped unit tests.

Uncovered conditions (uncovered_conditions): The number of conditions that are not covered by unit tests.

Uncovered conditions on new code (new_uncovered_conditions): This definition is identical to Uncovered conditions but restricted to new/updated source code.

Uncovered lines (uncovered_lines): The number of lines of code that are not covered by unit tests.

Uncovered lines on new code (new_uncovered_lines): This definition is identical to Uncovered lines but restricted to new/updated source code.

Unit tests (tests): The number of unit tests.

Unit tests duration (test_execution_time): The time required to execute all the unit tests.

Unit test errors (test_errors): The number of unit tests that have failed.

Unit test failures (test_failures): The number of unit tests that have failed with an unexpected exception.

Unit test success density (%) (test_success_density): Test success density = (Unit tests - (Unit test errors + Unit test failures)) / (Unit tests) * 100

Setting Up SonarQube in AutoRABIT

To use all the functionality included in your SonarQube license with AutoRABIT, you need to integrate SonarQube as a plugin with your AutoRABIT account. This requires a few configuration steps in SonarQube as well as in your AutoRABIT account.

Step 1: Generate a SonarQube Token

  1. Log in to your SonarQube instance.

  2. Go to User > My Account > Security. Your existing tokens are listed here, each with a Revoke button.

  3. The form at the bottom of the page allows you to generate new tokens. Once you click the Generate button, you will see the token value. Copy it immediately; once you dismiss the notification you will not be able to retrieve it.

  4. This token will be used when storing your credential in AutoRABIT. If you want to verify the token first, see the optional check below.
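If you want to confirm that the token works before storing it, the sketch below is one way to check it; it assumes a hypothetical server at https://sonarqube.example.com and passes the token as the Basic-auth username with an empty password. The api/authentication/validate endpoint is part of the SonarQube web API, but behavior can vary by version, so treat this as an optional sanity check.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class SonarTokenCheck {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("SONAR_TOKEN");   // the token copied in step 3
        String auth = Base64.getEncoder()
                .encodeToString((token + ":").getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sonarqube.example.com/api/authentication/validate"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());           // expected: {"valid":true}
    }
}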

Step 2: Store your SonarQube's credential in AutoRABIT

In this initial step, you store your SonarQube credential (username and token) in AutoRABIT.

  1. Log in to your AutoRABIT account.

  2. Hover your mouse over the Admin module and click on the Credentials tab.

  3. Next, click on Create Credential from the right navigation bar.

  4. On the next pop-up screen, give a Credential name.

  5. Choose the Credential Type as 'User name with Password'.

  6. Choose your Credential Scope

    • Global: Credential can be accessed within the team

    • Private: Credential for private usage

  7. Enter your SonarQube account's username. For the password, use the token copied in Step 1: Generate a SonarQube Token.

  8. Please double-check that you use your SonarQube username instead of the email address that you use to log in to SonarQube.

  9. Click Save.

Step 3: Integrate SonarQube with AutoRABIT

If you have been logged out of your account, log back in to AutoRABIT with your credentials.

  1. Go to Admin > My Account section.

  2. Go to the Plugins section.

  3. Check the SonarQube checkbox under Static Code Analysis.

  4. Fill in the below details:

    • Enter the SonarQube hosted URL. For the SonarQube cloud version, use https://sonarcloud.io.

    • Choose the Host Type, i.e., Cloud or On-premise. For SonarQube hosted in the cloud, you also need to add the Organization Key.

    • Select your Credential from the drop-down.

    • Click Test Connection to verify that the connection is authenticated. A success message is displayed once authentication completes.

    • Click Save.

  5. Click on Save once again and you are all set with SonarQube integration.

Step 4: Setting SonarQube Global Criteria Settings

You can now set the global Quality Gate criteria to enforce the SonarQube static code analysis tool across CI jobs, deployments, and gated commits. The Quality Gate gives you a Pass or Fail rating for your project in the SonarQube tool, depending on the metrics you have provided. If the criteria configured in AutoRABIT match the Quality Gate status reported in your SonarQube account, the process is aborted.

  1. Go to Admin > My Account section.

  2. Next, navigate to the Validation Criteria-Static Code Analysis section.

  3. Select the Enable checkbox.

  4. Enable the SonarQube checkbox and assign the Quality Gate status for all your projects. By default, it is set to ERROR; however, you can choose your own criteria. If the Quality Gate status reported by your SonarQube tool matches the status assigned here, validation fails and the build aborts.

  5. Click Save.

  6. Next, go to the Commit Validation - Approval Settings section. Here, you can allow SonarQube to identify potential software quality issues before the code moves to production and abort the commit process if the Quality Gate status in SonarQube matches the status set earlier.

  7. Select the checkbox: Enable criteria based Review Process

  8. Enable the Should pass validation criteria for Static Code Analysis checkbox, then select the following checkboxes:

    • SonarQube

    • Auto reject commit process if the criteria are not met

  9. Click Save.

  10. Similar to the SonarQube criteria configured globally in AutoRABIT for the Commit operation, you can set the same for the Merge process. Go to the next section: Merge Settings.

  11. Select the Enable criteria-based Review Process checkbox.

  12. Under Should pass validation criteria for Static Code Analysis, select the SonarQube checkbox.

  13. Finally, click on Save.
