Case Law Details

Case Name : In re Principal Commissioner of Central Tax Vs. NCS Pearson Inc (GST AAAR Karnataka)
Appeal Number : Order No. KAR/AAAR/07/2020-21
Date of Judgement/Order : 13/11/2020
Related Assessment Year :

In re Principal Commissioner of Central Tax Vs. NCS Pearson Inc (GST AAAR Karnataka)

The Appellate Authority allows the appeal filed by the Principal Commissioner of Central Tax, Bangalore West Commissionerate, and sets aside the ruling given by the Authority for Advance Ruling in KAR ADRG 37/2020 dated 22-05-2020 with regard to the classification of the Type-3 test. The Appellate Authority holds that the service provided for the Type-3 test is classifiable as an OIDAR service. The appeal filed by the Department is disposed of on the above terms.

We find from the information furnished by the Respondent that their activity is primarily to conduct computer-based tests for their clients. The type of computer-based test, i.e., whether the test is purely multiple-choice questions or a mix of multiple-choice and essay questions, depends on the purpose of the test and what the test sponsor aims to measure in a test taker. We are concerned only with the Type-3 test, which is a mix of multiple-choice questions and essay-based questions. It is the responsibility of the Respondent to provide the software to enable the candidates to take the online Type-3 test; appoint or establish test centres from where the candidates will take the online Type-3 test; provide for the candidate's test registration validation at the test centre; provide online and offline proctoring during the test-taking process; provide software for scoring the tests; and deliver the test results electronically to the candidate. From the above it is abundantly clear that the Type-3 test is conducted over the internet using a computer system. The process of test registration, the conduct of the test and the communication of the result are automated, and such a test would not be possible in the absence of information technology. Thus, three out of the four requirements of an OIDAR service are fulfilled. The bone of contention is the fourth ingredient, which is that the service should involve minimum human intervention. We find that the lower Authority has taken the view that the scoring by a human scorer of the essay-based responses in the Type-3 test renders the element of human intervention more than minimal, thereby disqualifying it as an OIDAR service. The Respondent has also taken the same line of defence before us.

There is no dispute on the fact that there is an element of human intervention involved in the process of scoring the essay responses in the Type-3 test. What needs to be decided is whether the extent of human intervention is 'minimum' or not. Since there are no guidelines in Indian law regarding the concept of minimum human intervention in electronically provided services, we refer to the European Commission VAT Committee Working Paper No 896, wherein the notion of 'minimal human intervention' was discussed in the context of determining whether or not a service can be said to fall within the definition of electronically supplied services. The European VAT Committee had agreed that, for the assessment of the notion of 'minimal human intervention', it is the involvement on the side of the supplier which is relevant and not that on the side of the customer. We have already detailed the entire process involved in conducting the Type-3 test, and it is seen that scoring by a human scorer is just one of the processes involved in a computer-based test.

One of the major benefits of a computer-based test is the facility of obtaining immediate grading. While grading of multiple-choice questions is done instantaneously using an algorithm, grading of essays involves the use of AES (Automated Essay Scoring), a specialised computer program that assigns grades to essays. The Respondent has an entity in the United States which has developed an AES for reliable scoring of essay responses in a computer-based test. How does one know that the automatic scoring system works well enough to give scores consistent with consensus scores from human scorers? Any method of assessment must be judged on validity, fairness and reliability. An AES would be considered valid if it measures the trait that it purports to measure, and it would be considered reliable if its outcome is repeatable. Before computers entered the picture, essays were typically given scores by two trained human raters; if the scores differed by more than one point, a more experienced third rater would settle the disagreement. In this system, reliability was measured by the degree of agreement among the human raters. The same principle applies to measuring a computer program's performance in scoring essays: an essay is given to a human scorer as well as to the AES program, and if the AES score agrees with the score given by the human scorer, the AES program is considered reliable. A machine-human score correlation serves as a good indicator of whether the AES is returning a stable consensus score for the essay. Therefore, the role of the human scorer is in effect a means to ensure the reliability of the AES program.

We do not discredit the importance of a human scorer in the process of assessment of essay responses. However, the focus here is on a computer-based test where the intent is to also assess the performance of the candidate using an automated system. The reliability of the AES is validated by its near agreement with the score given by the human scorer. For this reason, we hold that the involvement of the human element in the assessment of essay responses is well within the realm of 'minimum human intervention'. Further, even from the perspective of the candidate, the human involvement is minimum in the entire process of the Type-3 computer-based test, starting from the manner of registering for the test, through the actual test process, to the outcome of the test, as all stages are automated.
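The kind of machine-human agreement check described above is commonly quantified with simple agreement statistics. The following minimal Python sketch compares a set of purely hypothetical human scores against hypothetical AES scores using Pearson correlation and a within-one-point agreement rate; the data, function names and threshold are assumptions for illustration only and are not taken from the ruling or from the Respondent's actual system.

from statistics import mean

def pearson_correlation(xs, ys):
    # Pearson correlation between two equal-length lists of scores.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def adjacent_agreement(human, machine, tolerance=1):
    # Share of essays where the machine score is within `tolerance` points
    # of the human score, mirroring the "differ by more than one point"
    # rule described for pairs of human raters.
    hits = sum(1 for h, m in zip(human, machine) if abs(h - m) <= tolerance)
    return hits / len(human)

# Hypothetical scores on a 0-6 essay scale for ten candidates.
human_scores   = [4, 5, 3, 6, 2, 4, 5, 3, 4, 6]
machine_scores = [4, 5, 4, 6, 2, 4, 4, 3, 5, 6]

print(f"Pearson correlation: {pearson_correlation(human_scores, machine_scores):.3f}")
print(f"Within one point:    {adjacent_agreement(human_scores, machine_scores):.0%}")

A high correlation together with a high adjacent-agreement rate is what would ordinarily be read as the AES "returning a stable consensus score", in the sense the ruling describes.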
No doubt, at times the candidate seeks a revaluation or rescoring of their essay responses, and such a revaluation task is given to a human scorer. However, even in such cases there is no direct human interaction of an individualistic nature between the evaluator and the candidate. The Respondent accepts the electronic request for a rescore of the essay and returns the result to the candidate electronically. The candidate, who is the service recipient, has received a fully digitally provided service. When the Type-3 computer-based test is viewed as a whole, the scoring done by the human scorer is to be regarded as being within the realm of minimum human intervention. As such, the ingredient of 'minimum human intervention' required to classify the service as OIDAR is also satisfied. We therefore disagree with the decision of the lower Authority that the Type-3 test is not an OIDAR service.

We allow the appeal filed by the Principal Commissioner of Central Tax, Bangalore West Commissionerate and set aside the ruling given by the Authority for Advance Ruling in KAR ADRG 37/2020 dated 22nd May 2020 with regard to the classification of the Type-3 test.
