Channel: Gurock Software Support Forum
Viewing all 829 articles

Ability to search for blank custom multi-select field?

Hi Dennis and TestRail team!
  In TestRail (4.0.1 -- yes, we will upgrade soon), I created a custom, non-required multi-select field.
Most users are populating it, but some have not, so the field is blank in those instances.
  I would like to filter for test cases where this field is blank. I see 'Any' in the selectable filter list that TestRail provides, but there is no blank/empty option.

  Is there an approach I am missing that would let me filter for blank values in a custom multi-select field?

Thanks for any time you give to this,

-- Tom


[API][Python] Initial setup, help needed.

Hi all,

I'm having some difficulty setting up the basic API test with Python. I may be missing something obvious, but I thought I'd post here in case the result helps somebody else.

I'm running Python 3.4, installed in "C:\Python34".
I've downloaded the Python TestRail API bindings (http://docs.gurock.com/testrail-api2/bi … stallation) and extracted "testrail-api-master\python\3.x\testrail.py" to "C:\Python34".

I've taken the following example from the same page and saved it as "TEST.py" in "C:\Python34\Tests":

from testrail import *
from pprint import pprint  # pprint is not provided by testrail.py

client = APIClient('http://<URL>/testrail/')
client.user = '<UN>'
client.password = '<PW>'
case = client.send_get('get_case/1')
pprint(case)

With the URL, username, and password replaced to suit my installation, attempting to execute "TEST.py" in cmd returns the following error. Any ideas why?

C:\Python34\Tests>TEST.py
Traceback (most recent call last):
  File "C:\Python34\Tests\TEST.py", line 6, in <module>
    case = client.send_get('get_case/1')
  File "C:\Python34\testrail.py", line 36, in send_get
    return self.__send_request('GET', uri, None)
  File "C:\Python34\testrail.py", line 76, in __send_request
    result = json.loads(response.decode())
  File "C:\Python34\lib\json\__init__.py", line 318, in loads
    return _default_decoder.decode(s)
  File "C:\Python34\lib\json\decoder.py", line 343, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python34\lib\json\decoder.py", line 361, in raw_decode
    raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)

(I have existing test cases which should be returned)
(The user is active)
(API is enabled)
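The ValueError at the bottom of that traceback means json.loads received something that is not JSON -- most often an HTML error page caused by a wrong URL, a disabled API, or failed authentication. A stand-alone sketch (not part of the TestRail bindings) showing how a non-JSON body produces exactly this error:

```python
import json

# Simulate a server replying with an HTML error page instead of JSON,
# which is what json.loads sees inside testrail.py's __send_request.
html_body = "<html><body><h1>404 Not Found</h1></body></html>"
try:
    json.loads(html_body)
except ValueError as error:
    print(error)  # Expecting value: line 1 column 1 (char 0)
```

Printing the raw response, or opening the API URL in a browser first, is usually the quickest way to see which of those causes applies.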

Refresh TestRuns

Hello,
is it possible to refresh a test run page automatically in the client's browser? How can I set this up?
The problem is that new changes in the test run are not pushed to the rest of the test crew; they have to refresh their page manually.

Many Thanks

Test Case Delete Stats

Hi,
  How can I get the number of test cases deleted from TestRail, similar to the activity summary for updated and created test cases?


Thanks,
Ramya

Drop down list on Run page

I'd like to put a JavaScript drop-down list on the Run page to allow users to choose the browser to run the automation in. Your sample code contains this:

        /* Create the button */
        var button = $('<div class="toolbar content-header-toolbar"><a class="toolbar-button toolbar-button-last toolbar-button-first content-header-button button-start" href="javascript:void(0)">Run Automated Tests</a></div>');

Which works great to create a button which is then tied to an action.

The thing is, I'm not a JavaScript web developer, so I don't yet understand how to add a drop-down list to the same page using a similar approach. The HTML for a drop-down list is perfectly understandable, but I'm not clear on where it would go in the UI script.

<select>
  <option value="ie">Internet Explorer</option>
  <option value="chrome">Chrome</option>
</select>

Huge thanks in advance for any hints. I'm wondering especially where the class names you use come from. Are there particular class names for formatting a select element?

-Kent

Feature Request: Enable partial seconds for elapsed time

I have a lot of unit tests and automated tests where the execution time is reported in milliseconds. I can convert the elapsed time to seconds (e.g. 1579 ms -> 1.579 s) before adding the result to TestRail. However, the Elapsed time field does not allow partial seconds: if I enter 1.579s when adding a test result, TestRail sets the elapsed time to 9m 39s. Can the Elapsed time field be updated to accept partial seconds?
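Until then, one workaround is to round millisecond timings to whole seconds before posting the result. A sketch (the helper name is mine, not anything from the TestRail API):

```python
def ms_to_elapsed(ms):
    """Convert a millisecond duration to TestRail's whole-second
    elapsed format, rounding and never going below one second."""
    secs = max(1, round(ms / 1000))
    return "%ds" % secs

print(ms_to_elapsed(1579))  # 2s
print(ms_to_elapsed(400))   # 1s
```

The obvious cost is losing sub-second precision, which is exactly what this feature request asks to avoid.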

Multiple iterations of a testcase in a plan

Hi, is it possible to run multiple separate iterations of the same test case within a test plan, without having to duplicate the case in the suite?

I know I can run the test again over the top of the last result, and in a pinch I can use configurations. However, I would prefer to have just one test case that I can add to the same test plan multiple times to record multiple results.

Maximum filesize or maximum testcases for an XML import?

I'm trying to import a rather large XML file (13 MB) but keep getting an error when I try:

(Bah, how do I upload images to the forum?) Here's what it says in a popup window:


"An error occurred during the last operation or your installation is currently in maintenance mode.  Please try again or refresh the current page."
(With an "OK" button.)

If I bring the size of the file down (and hence the number of test cases), the import succeeds.

Is there a size or test-case limit per import?

Thanks.


[FR] Return to top level in Test Plan after case completion

Hi,

Feature request (if it hasn't been requested before): when running a test case in a suite that's inside a test plan, once the case is completed (pass/fail/whatever), I'd like to be brought back to the top level of the test run I am currently executing. For example, if I am in a Settings section of a test plan and complete a test case, then upon selecting pass/fail I am brought back out of the case to the Settings level. It saves clicks and time ;-)

I would have thought that would be the normal operation.

Firefox 36.0.4 report viewing stinks (sorry - but it's the truth)

Viewing a run for any of my test plans/suites displays rather badly in Firefox 36.0.4. I opened the index.html file in Chrome and it looks just fine: chart, text, etc. all look as I would expect. However, in the Firefox version noted it looks bad: scrunched up, no graph displayed, and so on. Is this a known issue?

Mike

Reporting feature requests

Hey guys,

TestRail, for the most part, is a testing dream come true for me!  But, one of the most important parts of testing is letting product managers and other business people know the results of test passes.  Of course, this is where reporting comes in.

Unfortunately, the reporting built into TestRail, though comprehensive, is nigh unusable. The reason lies in the available methods of delivery. I'll go over where the problems lie:

- Email a link to the report.
1.  If we're going to bother sending an e-mail about a test report, the e-mail should contain the content of the report, rather than just a link to view it in TestRail.  This would be a minor problem, but for:
2.  In order to view a TestRail report, one must have a TestRail account.  We don't want to use up TestRail licenses just so a VP can glance at a report once in a while.  It would be great if there were a way to either create free, view-report-only accounts, or to make reports generally public.

- Email the report as an attachment.
The attachment, as you know, comes in the form of a zip archive of a web page.  This presents several problems.
1.  Security.  Most people (hopefully!) are fairly guarded when they receive an e-mail with an attachment -- there's always the concern that "Your test result is attached" is social engineering.  This could be overcome with some education of the intended audience, but the tool should accommodate the audience -- not vice-versa.
2.  Anti-virus.  My corporation's e-mail server strips the JavaScript files out of the attachments, since .js files in an e-mail could be malicious scripts.  When the webpage is opened, this makes the pie charts, among other things, disappear.  We also get notifications from the e-mail server that .js files were stripped.  This isn't a good experience for our intended audience, which could go as high as VPs.
3.  Getting a non-technical business person to unzip a file and open index.html.  As above, busy folks don't want to take the time to pursue a test report.  They certainly don't want to find where the report unzipped to and hunt through obscure-sounding filenames for index.html.

Ultimately, we need reports in the body of e-mail messages.  I understand that this is not currently done because Outlook's HTML rendering engine is based on Word, so the reports would lose the active content of the website versions and suffer some goofy rendering.  At a minimum, I'd like even a very simplified report, using basic tables and some nice fonts where possible.  The active content, while nice, is not necessary for viewing in an e-mail.

I will probably end up writing my own reporter to accomplish this, though I'd greatly prefer a turnkey solution.
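For anyone heading down the same road, here is a minimal sketch of a table-only report body that survives mail clients and servers which strip scripts. The data shape is hypothetical; in practice the results would be fetched via the TestRail API:

```python
def build_report_html(results):
    """Render a list of result dicts as a plain HTML table --
    no JavaScript, so nothing for a mail server to strip."""
    rows = "".join(
        "<tr><td>%s</td><td>%s</td></tr>" % (r["case"], r["status"])
        for r in results
    )
    return (
        "<table border='1'>"
        "<tr><th>Test Case</th><th>Status</th></tr>"
        + rows
        + "</table>"
    )

html = build_report_html([
    {"case": "Login works", "status": "Passed"},
    {"case": "Logout works", "status": "Failed"},
])
```

The resulting string can then be sent as a text/html part using the standard library's smtplib and email modules.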

Your work is otherwise amazing!  This is a fantastic tool, and I pushed hard on execs up to the VP level to get its purchase approved for the company.

Server side scripts for automation

Hi,
I was reading and trying out the automated test runs referring to this page - http://docs.gurock.com/testrail-custom/ … ated_tests

I was able to add the "Start Tests" button. However, I am a bit confused about where to put the server-side PHP script. The page says: "The trigger script (trigger.php) needs to be placed into TestRail's installation directory on the web server next to TestRail's index.php file."

I am not exactly clear on where to copy this file. I am using a TestRail Hosted trial (which I soon plan to convert to a subscription).

Do I need my own web server?
If so, I guess it will have to be allowed through the firewall. My PC is behind the company's firewall and can't run a public web server without an authenticated log-in.

Can someone please advise?

Thanks,

UI script to hide all test runs which have a 100% pass rate

Some of our automated test plans contain a large number of test runs (> 1000).

This UI script adds a button to the ^plans/view page which hides all the test runs that have a 100% pass rate.

This is useful if you've got a test plan with lots of test runs and only a few tests in a non-passed state (running a report is quite a long process if you only want a quick view of non-passed tests).

Works with TestRail version    4.0.4.3277

name: Hide 100% Passed test runs
description: Hide 100% Passed test runs
author: Matt Horrocks, Dell.
version: 1.0
includes: ^plans/view
excludes: 

js:
$(document).ready(
    function() {
        $("<a class='show-attention-required toolbar-button content-header-button button-view'>Show attention required</a>").insertBefore('.toolbar.content-header-toolbar a:last-child');
        $(".show-attention-required").click(
            function(e)
            {
                e.preventDefault();
                if ($(this).text() == "Show all"){
                    $(".chart-minibar-percent").each(function( index ) {
                            $(this).closest('tr').show();
                    });
                    $(".show-attention-required").text("Show attention required");
                } else if ($(this).text() == "Show attention required"){
                    $(".chart-minibar-percent").each(function( index ) {
                        if ($(this).html().indexOf("100%") >= 0){
                            $(this).closest('tr').hide();
                        }
                    });
                    $(".show-attention-required").text("Show all");
                }
            }
        );
    }
)

How to copy Test Run

Hello.
I have a test plan with a few test runs, each with specific configurations and manually selected test cases.
At some point we need to duplicate a test run together with its selected configurations and test cases.
How can we do that? Creating it manually again is quite inconvenient; it would be nice to have a Duplicate button.


Feature request: Test Run screen for all configurations

Hi.
We have a test plan with many test runs and configurations: one test run per sprint, with configurations for the different platforms.
We can see the overall status of the test plan, or the status of a test run with a specific configuration, but we can't see the status of a test run across all its configurations, as there is no such link.
I suggest adding such a link and a related page with details.

Feature request: Default field values for Section

Hi.
We create test cases from the top down:
- First, areas/subareas are defined.
- Then we have a specific change request for that area. The change request has a set of target platforms, a user story, and acceptance criteria. We create a section for that change request.
- After that we define test cases under the section -- simply typing the titles.
- Finally, every test case is defined in more detail: steps, expected results, test data, etc.
With this approach, 99% of the time all test cases in a section share the same field values, such as Platform, Owner, Milestone, References, Status, Priority, and Type. It would be more convenient to define these field values at the section level and have them applied automatically to all new test cases under the section.

[API] custom_steps_separated does not update

Hi,
I am trying to add a test case from Python using the following code.

title = 'Test case from CSV - 5'
type_id = 2
priority = 2
estimate = '20s'  # estimate is a timespan string, e.g. '20s' or '1m 45s'
case = client.send_post('add_case/2', {
    'title': title,
    'type_id': type_id,
    'priority_id': priority,
    'estimate': estimate,
    'milestone_id': 1,
    'refs': 'Excel',
    'custom_preconds': 'These are the preconditions for a test case',
    'custom_steps_separated': [
        {'content': 'Step 1', 'expected': 'Expected Result 1'},
        {'content': 'Step 2', 'expected': 'Expected Result 2'},
    ],
})

However, when I execute this, it creates a new test case and adds all the details except custom_steps_separated.
I have separated steps enabled in Administration, and I am already able to create separated steps manually or via a CSV upload.

Can anyone please point out what I am doing wrong here?

Thanks,
Gopal

How to customize test case ID and test run ID?

Hi,

Today I tried to edit/customize the above-mentioned IDs but failed. This may be because I'm not familiar with the tool; however, being able to customize the case IDs is important for how we manage our project.
Is there any way to modify those IDs? Thanks in advance!

How to make Suite name show up in github integration

I have my test cases for a project broken up into test suites.
When I push a defect (test result) to GitHub using the GitHub integration, I want the test suite to show.
It doesn't appear to be a predefined field that I can select, but I would have expected it to be predefined.
My GitHub integration is working very well, other than this one small thing I need to address.
Can you please provide some assistance?
Thanks!
Del
