
International Journal for the Scholarship of Teaching and Learning

http://academics.georgiasouthern.edu/ijsotl/v6n2.html

Vol. 6, No. 2 (July 2012)

ISSN 1931-4744 @ Georgia Southern University


Improving the Development of Students’ Research Questions and Hypotheses

In an Introductory Business Research Methods Course

Laurie Strangman

University of Wisconsin-La Crosse

La Crosse, Wisconsin, USA

[email protected]

Elizabeth Knowles

University of Wisconsin-La Crosse

La Crosse, Wisconsin, USA

[email protected]

Abstract

In an introductory research methods course, students often develop research questions and

hypotheses that are vague or confusing, do not contain measurable concepts, and are too

narrow in scope or vision. Because of this, the final research projects often fail to provide

useful information or address the overall research problem. A Lesson Study approach was

used to develop a new lesson that models the development of research questions and

hypotheses and provides multiple opportunities for students to practice this skill. Two tools

were also developed to help students navigate this process, and the learning outcomes of

the lesson were clearly defined. To assess the effectiveness of this lesson, 122 research

proposals generated by student research teams before and after implementation of the new

lesson were evaluated using a grading rubric based on the learning outcomes. There were

statistically significant improvements in three of the five learning outcomes.

Keywords: lesson study, teaching research methods

Introduction

Many disciplines such as psychology, sociology and business include an introductory

research methods class as part of the core curriculum of their program, although the

approach to teaching the course can vary widely. Research methods may be taught as an

exercise in addressing hypothetical situations or by studying research cases. Alternatively,

portions of the research process can be isolated for students to practice; for example,

students may write a research proposal or design a survey. McBurney (1995) employs a

problem method where students are given a set of scenarios to analyze using a variety of

research methods. Even when a project-based approach is taken, where a student

completes a research project from beginning to end, there are still elements of the course

curriculum that may vary. Much of the difference between approaches is dependent on

how the project is started: faculty may assign the research questions to study, or students

may develop the research questions themselves. Aguado (2009) guides the development of

research questions by providing general topics from which to choose, and in this way sides

with “control over choice” (p. 253). Alternatively, Longmore et al. (1996) note that when

students choose their own topics, motivation and quality tend to improve. However, this

approach presents some unique challenges. Students may mistake choosing a topic for

defining the broad business or research problem, and their projects may lack focus.

At the University of Wisconsin-La Crosse, business majors are required to take an

introductory research methods course entitled “Business and Economics Research and

Communication”. Upon entering the course, all students have completed an introductory

statistics prerequisite and the typical student is either a second semester sophomore or a

first semester junior. Over the course of the semester, students complete a research

project in groups by collecting and analyzing primary data. The final project is presented

in both written and oral form at the end of the semester. The problems that students

research are self-chosen and reflect either basic research on attitudes and behaviors or

address an applied problem. For example, projects have been completed for local

businesses, for governmental units, and for organizations and departments on campus.

There are five broad common learning objectives associated with the course: (1) Develop

the ability to define a research problem; (2) Recognize and use the appropriate techniques

to collect data to address a research problem; (3) Interpret data using statistical analysis;

(4) Develop the ability to effectively communicate research results both in writing and orally;

and (5) Develop the ability to critically evaluate limitations, errors, and biases in research.

How to help students achieve the first of these objectives is explored in this research.

The difficulty of teaching an introductory research methods class is well-recognized

(Denham, 1997; Markham, 1991). Denham notes, “Research methods may be the most

difficult course to teach at the undergraduate level” (1997). The complex nature of the

course presents several challenges. It is difficult to lead students through a process of

answering a question that may not be well defined and for which there are multiple research

approaches (McBurney, 1995). McBurney notes that “Students tend to become anxious

and sometimes dispirited when an instructor refuses to tell them the right answer” (1995).

In addition to the abstract nature of research, students often have had little or no exposure

to conducting research, or the thought processes involved. Markham notes that “High

school work had not prepared students to think in terms of variables or hypotheses, and

very few students had taken enough laboratory science or mathematics to allow much

transfer of learning” (1991). Aguado notes that the deficiency in skills for conducting

empirical research is present at both the undergraduate and graduate levels (2009).

This lack of skills is magnified because the research process is best learned by doing.

Evidence of these challenges appears early in the semester, as students try to develop

research questions that will address the overall business problem. While experienced

researchers anticipate the challenges with this first step, students seem to encounter a

particularly substantial hurdle in developing the initial direction of their research. Even

when a research topic is chosen, they do not necessarily see how to frame their research

questions and hypotheses (Ransford & Butler, 1982). When evaluating student work, we

noticed that students often developed research questions and hypotheses that were vague

or confusing in terms of language, did not contain measurable concepts, and were either too

narrow or too broad in scope to generate valuable conclusions. Because of this weak start,

data collection was often haphazard, with students realizing too late in the process that they

wanted to learn something different than what the data would reveal to them. When this


occurs, the final research project provides neither useful information to address the overall

research problem nor the information a decision maker needs in order to act.

There are several reasons why students struggle to produce research questions and

hypotheses. First, problem definition is an abstract process. This produces a challenge for

students because the mind prefers concrete knowledge (Willingham, 2009). Second,

students understand new ideas and concepts by building on what they already know,

specifically by seeing relationships with and making connections to knowledge they already

possess. However, outside of taking surveys, students have very limited experience with

research activities, so there is little about the process of defining a problem that is familiar

to them. With little foundation on which to build, students often leave the classroom with

a shallow understanding of the process of problem definition and knowledge that is only

tied to the specific examples or context offered in class (Ambrose et al., 2010).

Because their knowledge is shallow, students will have difficulty generalizing the information

contained in a specific example and applying it to a completely new business problem.

According to Willingham (2009), “We understand new things in the context of things we

already know, and most of what we know is concrete. Thus it is difficult to comprehend

abstract ideas, and difficult to apply them in new situations” (p.88). Van Gelder (2001) also

notes that transfer of skills is a challenge, as skills developed in one context may not carry

over to other situations. When presented with new ideas or concepts that are abstract in

nature, students tend to focus on the more concrete surface details of examples without

seeing the underlying structure of the problem.

To address the difficulties associated with transfer, Willingham (2009) suggests that

instructors provide students with several different examples and that these examples be

compared to one another. Ambrose et al. (2010) also note that “structured comparisons”,

which involve comparing and contrasting different examples, problems, or scenarios, have

been shown to aid in transfer (p. 110).

Once examples have been provided, students need multiple opportunities to practice using

new knowledge and skills (Willingham, 2009). Practice is the only way to become proficient

at any new skill, and it is practice and experience that separate the novice from the expert.

Like the examples offered in class, this practice should expose students to a variety of

situations. The examples need to provide students with the opportunity to practice transfer

itself, by applying concepts to new contexts (van Gelder, 2001).

Ambrose et al. (2010) further suggest breaking an abstract process down into its

component parts and offering students the opportunity to practice each of these component

skills individually. As students become more proficient at the individual pieces of the

research process this frees up space in working memory for higher level thinking. “Thus,

with practice, students gain greater fluency in executing individual sub skills and will be

better prepared to tackle the complexity of multiple tasks” (p. 105).

While much of the literature about teaching an introductory research methods class

considers the overall research process, there has been little emphasis on the first step: how

to help students develop the ability to define a research problem. Yet this aspect is so

crucial to the success of the students’ research project that it cannot be ignored. The

purpose of this project was to create a lesson that would help improve students’

development of research questions and hypotheses. A Lesson Study approach was used as


the basis for our exploration. The process of Lesson Study involves a small group of faculty

who collaborate to plan, teach, observe, revise and report on a specific class lesson. A

backward design approach is used where faculty start by clarifying the goal of the learning

process, and then work to design instructional experiences that achieve the goal (Cerbin &

Kopp, n.d.). Emphasis is placed on making student learning visible in order to identify gaps

in understanding.

Lesson Development

Defining Outcomes

The first step in helping students learn how to develop their research questions and

hypotheses was to more clearly define the outcomes or expectations. We collaboratively

identified the six most important characteristics of a well-defined business problem in order

to provide a solid foundation on which to build the research project. These characteristics

are: (1) the scope or vision of the proposal encompasses the relevant variables; (2) the

information is useful for decision making or addressing the overall problem; (3) the

research questions are well defined; (4) the research hypotheses are well defined; (5) the

research hypotheses are measurable; and (6) the research questions and hypotheses are

directly related. These characteristics were refined as the lesson was developed, and

eventually became the basis of a grading rubric for the student research proposals that

were evaluated in this study. See Appendix A for the sample rubric.

Modeling the Process of Problem Definition and the Use of Learning Tools

Prior to the development of this lesson, we simply lectured about the importance of defining

the research problem but provided no opportunity to practice the process until students

considered their own research proposals. We developed the lesson to systematically

address this deficiency through a three-day unit that modeled the process of problem

definition for students based on the learning outcomes. The new lesson involved activities,

prompts, and tools that specifically addressed the challenges students faced. Since

research suggests that practice is key to mastering concepts, practice problems were

designed to structure students’ practice, with varying degrees of instructor assistance,

before they independently developed research questions and hypotheses for their own

research projects. By giving students clear learning objectives as well as multiple opportunities to

work with the process in the classroom, it was our expectation that students would develop

the skills necessary to create research questions and hypotheses.

On the first day of the lesson, a business problem was posed: “How could the university

increase the number of applications?” We began by modeling a brainstorming process for

students. The intent of this was to demonstrate how researchers explore a problem and

consider which variables are relevant. Specifically, this activity addressed the issue of

“scope” in problem definition by encouraging students to think more broadly than they may

have otherwise. A set of prompts was used to help students consider multiple dimensions

of the problem. For example, students were asked to consider who the decision maker

was, who the stakeholders were, and what they would need to know to answer the problem.

Eventually eight prompts or questions were developed to help students consider multiple

aspects of the research problem. The full set of prompts can be found in Appendix B. All

of the students’ ideas about relevant variables were recorded on the board, without any

filtering or evaluation. What evolved was a question map that helped students visualize


the brainstorming process that researchers often use to identify important information to

address a research problem. After the initial brainstorming session, students connected

related ideas and eliminated ideas that were not useful in addressing the overall problem

or for which they could not collect data.

After students had no further comments about the question map, they were prompted to

consider the large themes that surfaced from the brainstorming. These themes became

the basis for research questions that addressed the overall business problem. To clarify

how research questions and hypotheses fit together, and to demonstrate how they must

relate to the overall problem and the end use of the research, a second tool was developed:

the problem definition table. The table visually displays the overall research problem as

overarching to the specific research questions. Multiple prompts for research questions

help develop scope by reminding students that there is more than one avenue to pursue

when exploring the overall problem. The progression from question to hypothesis conveys

the direct connection between the two elements. Finally, prompting students to consider

how the information would be used reminds them that relevance matters. If the

information generated from a research question cannot be used to answer the overall

business problem then it should be replaced by a more appropriate research question.

Students were encouraged to complete this circular process for each idea developed out of

the question map. The template for the problem definition table can be found in Appendix C.

On the second day of the lesson, students were presented with a new problem to address:

“Should the Wisconsin legislature pass a law prohibiting the use of all cell phones while

driving?” They were then instructed to follow the steps previously demonstrated: generate

a question map, evaluate the ideas in the question map, and use the themes to complete a

problem definition table. The discussion was quite lively as students had a well-structured

approach to use as they developed research questions and hypotheses. Completing the

process began to build their confidence that they could define research questions on their own.

The research questions that students generated from this second example were compiled

and reviewed with the entire class with respect to the learning outcomes. In this way, the

strengths and weaknesses of their questions and hypotheses could be discussed.

On the third day of the lesson, students engaged in the same process as they explored their

individual team’s research problem. Allowing class time to take this step encouraged

students to thoughtfully consider how to transform a research topic into research questions.

One of the key findings of the lesson study approach was the recognition that our initial

lesson design did not model a variety of types of research questions. Students had a

tendency to get “stuck” with language that they knew was measurable. For example, their

research questions all began with, “What is the most important…?” As the lesson

developed, this was deliberately addressed by presenting overall problems that would

prompt different types of research questions that students might analyze with univariate

and bi-variate statistics (the focus of the statistical coverage in the introductory course).

Since the course emphasized the evaluation of quantitative measures as opposed to

qualitative measures, these included testing the value of a mean or proportion, making a

comparison between groups, or evaluating which value is highest or lowest.

Methods


Procedures

Over the course of five semesters, two instructors of the course collected the first drafts of

all 122 student research proposals submitted. The proposals included a statement of the

overall research problem, as well as the research questions and hypotheses that the

students had developed. Fifty-one of these proposals were written in semesters prior to the

implementation of the new lesson and seventy-one were developed after. The proposals

were randomly ordered and numbers were assigned to each so that the semester in which

they were completed could not be identified. This process was used in order to minimize

any bias introduced by the desire of the instructors to see improvement.

The two instructors jointly reviewed each of the research proposals while physically sitting

together. The proposals were evaluated on the basis of the six previously identified

characteristics of research questions and hypotheses. Specifically, a rubric was developed

which rated each of these characteristics on a scale of one to five depending upon the

degree to which the research questions and hypotheses met the individual characteristic,

with 1 being not met at all and 5 being completely or fully met. As each proposal was read,

we compared the scores we assigned to each of the characteristics. If there was

disagreement, the rationale that led us to that score was discussed until we reached

agreement on a single score.

To determine if there was bias due to systematic differences in the quality of students over

the course of the study, we compared characteristics of the students in the two semesters

prior to the changes in the lesson to characteristics of the students enrolled in the class

after the changes were made. This was a concern since the implementation of stricter

admission requirements at the university made it possible that the pool of students in the

research methods class was improving over time. We tested for statistical differences in

cumulative GPA, math ACT score, composite ACT score, high school class rank and gender

between the two groups of students. The results of independent samples t-tests indicate

that there was no difference in the student characteristics before the new lesson and after

(see Table 1).

Table 1. Student Characteristics in Semesters Prior to and After the Implementation of the New Lesson

Student Characteristic                 Mean Prior to New Lesson   Mean After New Lesson
Cumulative GPA                         3.103 (.458)               3.096 (.443)
High School Class Rank                 35.44 (36.896)             35.59 (46.308)
ACT Math Score                         25.27 (3.528)              25.57 (3.372)
ACT Composite Score                    24.52 (2.825)              24.93 (2.669)
Gender (Proportion of Male Students)   .54 (.500)                 .56 (.497)

Notes: Standard deviations are reported in parentheses. p > .10 for all pairs.
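The cohort-equivalence check described above can be sketched with an independent samples t-test. The data below are invented stand-ins, drawn to match the group sizes and the GPA summary statistics in Table 1; scipy's `ttest_ind` with `equal_var=False` (Welch's test) is assumed as the comparison routine.

```python
import numpy as np
from scipy import stats

# Hypothetical cohort data, sized like the study (51 proposals before the new
# lesson, 71 after) and centered on the GPA means reported in Table 1.
rng = np.random.default_rng(12345)
gpa_before = rng.normal(loc=3.103, scale=0.458, size=51)
gpa_after = rng.normal(loc=3.096, scale=0.443, size=71)

# Welch's t-test does not assume equal variances across the two cohorts.
t_stat, p_value = stats.ttest_ind(gpa_before, gpa_after, equal_var=False)

# A p-value above .10 would indicate no detectable difference between cohorts,
# mirroring the "p > .10 for all pairs" note under Table 1.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

The same comparison would be repeated for each characteristic (class rank, ACT scores, gender proportion).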

It is important to note that the process used to select the student research teams was the

same in all five semesters considered. Students selected their own teams using a process

similar to “speed dating”. They were initially divided into groups of approximately four


students and given six minutes to introduce themselves to one another and to elicit

information from the others in the group that would help them in making the best matches

possible. At the end of the six minutes, they moved to a new group of students to

repeat this process. After several interview rounds the students then self-selected their

teams for the semester.

Analysis

Mean scores for each individual characteristic were calculated, as well as an aggregate

proposal score. The aggregate score on each proposal was simply the average of the

ratings on each of the individual components, with each component weighted equally.

Independent samples t-tests were run to compare the scores of the individual characteristics

and the overall proposal scores before and after the lesson. This allowed us to determine

differences in student outcomes that might be attributed to the lesson design.
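The scoring and comparison just described can be sketched as follows. The rubric ratings are invented for illustration (a real run would have 51 "before" and 71 "after" rows); each proposal's aggregate score is the unweighted mean of its six component ratings, and the two groups are then compared with an independent samples t-test via scipy.

```python
import numpy as np
from scipy import stats

# Invented rubric ratings: rows = proposals, columns = the six
# characteristics, each rated on the 1-5 scale.
before = np.array([
    [3, 4, 3, 3, 2, 3],
    [2, 3, 4, 3, 3, 3],
    [4, 5, 3, 4, 3, 5],
])
after = np.array([
    [4, 4, 4, 3, 3, 4],
    [3, 4, 5, 4, 4, 4],
    [4, 5, 4, 4, 3, 4],
])

# Aggregate score = average of the six equally weighted component ratings.
agg_before = before.mean(axis=1)
agg_after = after.mean(axis=1)

# Independent samples t-test on the aggregate scores.
t_stat, p_value = stats.ttest_ind(agg_before, agg_after)
print(f"mean before = {agg_before.mean():.2f}, "
      f"mean after = {agg_after.mean():.2f}, p = {p_value:.3f}")
```

Per-characteristic comparisons work the same way, with `before[:, j]` and `after[:, j]` in place of the aggregates.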

Results and Discussion

There were statistically significant improvements in three specific objectives (see Table 2).

The improvement that was observed in two of these, “vision or scope” and “research

questions and hypotheses are directly related,” could be explained by the introduction of the

problem definition chart. This tool provides visual prompts to students with respect to these

two characteristics. Specifically, the multiple columns in the chart encourage research

teams to develop several research questions to address the broad business problem, as

opposed to just suggesting one or two questions, which was more typical prior to the

implementation of the new lesson. In addition, the design of the individual columns within

the chart guides students to focus on the connection between an individual research

question and its matching hypothesis. The improvement in “research questions are well

defined” as well as additional impact on “research questions and hypotheses are directly

related” may have occurred because the new lesson modeled these two characteristics to

students using multiple examples.

Table 2. Mean Scores on Research Proposals Prior to and After the Implementation of the New Lesson

Learning Objective                                           Mean Score Prior   Mean Score After
Vision or Scope ***                                          2.73 (.934)        3.29 (.839)
Information is Useful                                        3.95 (1.262)       3.96 (.999)
Research Questions are Well Defined **                       3.35 (1.415)       3.89 (.853)
Research Hypotheses are Well Defined                         3.36 (1.390)       3.66 (1.068)
Research Hypotheses are Measurable/Testable                  3.08 (1.686)       3.35 (1.548)
Research Questions and Hypotheses are Directly Related *     3.09 (1.593)       3.53 (1.230)
Aggregate Score over all Characteristics **                  3.26 (.802)        3.61 (.696)

Notes: All items were rated on a 5-point numerical rating scale. Standard deviations are reported in parentheses.

* p < .10
** p < .05
*** p < .01

“p > 0.05” for inexact two-tailed testing, however, “p = 0.04: p < 0.05” for exact one-tailed testing). With a given sample size, the result of one-tailed testing would provide more accurate information about whether a directional research hypothesis is supported or not.
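The relationship between exact one-tailed and inexact two-tailed p-values can be illustrated with a short sketch. The sample data are invented, and scipy's `ttest_ind` is assumed as the test routine: whenever the sample effect lies in the predicted direction, the one-tailed p-value is exactly half of the two-tailed p-value, which is why a one-tailed p of 0.04 corresponds to a two-tailed p of 0.08.

```python
import numpy as np
from scipy import stats

# Invented samples in which group A scores somewhat higher than group B.
a = np.array([7.1, 6.8, 7.4, 7.0, 6.9, 7.3])
b = np.array([6.7, 6.5, 7.0, 6.6, 6.8, 6.4])

t_two, p_two = stats.ttest_ind(a, b)                         # two-tailed
t_one, p_one = stats.ttest_ind(a, b, alternative="greater")  # one-tailed

# With the effect in the predicted direction (t > 0), p_one == p_two / 2.
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```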

3. The confusion between the research and statistical hypotheses

3.1. Prevailing confusion

The previous section has shown that the reasons for supporting two-tailed testing for directional research hypothesis tests are not as legitimate as they first appeared. Additionally, the previous arguments have made it clear that, because the directional statistical hypothesis should be derived from the directional research hypothesis, using two-tailed testing to test this directional research hypothesis is not in accord with this basic rule. The authors believe that the current overuse of two-tailed testing stems from the favorable views on two-tailed testing and from researchers’ blindness because they are unaware that they are breaking this basic rule. Let us now see how this rule, which dictates the flow from the research hypothesis to the statistical hypothesis, is often violated in the fields of marketing, business, and other social sciences.

The following type of hypothesis expression appears frequently in a large amount of the methodology literature.

H1. Males self-disclose more than females.

H0. There is no difference between males and females with respect to self-disclosure (italics added) (Frey, Botan, & Kreps, 2000, p. 326).

It is clear that this example contains definitional confusion between the expressive mode of the research hypothesis (RH expressed by a verbal statement about some testable relationship between concepts) and that of the statistical hypothesis (H0 and H1 expressed by a pair of complementary parameters). There is neither a clear distinction nor an exact linkage between the research hypothesis, RH, and its statistical hypothesis, H0 and H1. “RH per se” is identified with the alternative hypothesis H1, and the “opposite of RH” is identified with the null hypothesis H0. In addition, the verbally expressed “H0: …no difference…” and the verbally expressed “H1: …more than…” are not complementary to each other. Under such confusion, (1) statistical testing would be misunderstood as a direct test between “H0: the opposite of RH” and “H1: RH per se,” (2) because the verbally expressed H0 involves only a “no difference (i.e., =) sign,” it is likely to mislead us into believing that two-tailed testing is applicable, and (3) the verbally expressed null hypothesis has been frequently misunderstood as the “object of falsification” (e.g., Ledford, 2001; Miller & Fox, 2001) or as the “object of strong inference” (e.g., Brinberg, Lynch, & Sawyer, 1992; Ruscio, 1999), although the “straw-person” null hypothesis has no basis in theory. One should remember that “falsification (Popper, 1959)” is Popper’s alleged basic nature of scientific knowledge and that “strong inference (Platt, 1964)” is a process of pitting competing research hypotheses against each other where each is based on a different theory.

However, if one clearly distinguishes the expressive mode of the research hypothesis from that of the statistical hypothesis, then the exact logical linkage between the research hypothesis (i.e., RH) and its statistical hypothesis (i.e., H0 and H1) can be made as follows:

RH. “Males self-disclose more than females.”

H0. μMale ≤ μFemale [the negation of the assertion]

H1. μMale > μFemale [the assertion]

Evidently, because the null hypothesis, “H0: μMale ≤ μFemale,” includes the “≤” sign, there is no way to apply two-tailed testing to it. It is worth noting that statistical testing is the test between “H0: μMale ≤ μFemale” and “H1: μMale > μFemale”, and this test result should be used as indirect evidence for supporting or not supporting RH; that is, “rejecting/not rejecting the ‘straw-person’ H0” leads to “accepting/not accepting the ‘substantive’ H1,” and this result, in turn, leads to “supporting/not supporting RH,” respectively (e.g., Kerlinger & Lee, 2000, p. 279).
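As a sketch of this logic in code, the statistical pair H0: μMale ≤ μFemale versus H1: μMale > μFemale maps onto a one-tailed test; rejecting H0 then serves as indirect evidence supporting RH. The self-disclosure scores below are invented, and scipy's `ttest_ind` with `alternative="greater"` is assumed as the test routine.

```python
import numpy as np
from scipy import stats

# Invented self-disclosure scores; higher = more self-disclosure.
males = np.array([5.2, 4.8, 5.5, 5.1, 4.9, 5.4, 5.0])
females = np.array([4.6, 4.9, 4.4, 4.7, 4.5, 4.8, 4.3])

# H0: mu_male <= mu_female (the negation); H1: mu_male > mu_female (the
# assertion). alternative="greater" makes this a one-tailed test of H1.
t_stat, p_value = stats.ttest_ind(males, females, alternative="greater")

# Rejecting H0 (small p) -> accepting H1 -> indirect support for RH.
supports_rh = p_value < 0.05
print(f"t = {t_stat:.3f}, one-tailed p = {p_value:.4f}, supports RH: {supports_rh}")
```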

Clarifying the difference between the research and statistical hypotheses is important; it prevents us from classificatory confusion caused by the oversight of the coexistence of two different types of research hypotheses. Almost all the research hypotheses derived from theories can be classified into two types: the research hypothesis in existential form (RHEF) and the research hypothesis in non-existential form (RHNF). The former, RHEF, which is most common, involves verbal assertions about the existence of some testable relationships between concepts, such as “has a negative influence on” (e.g., Bagozzi, 1980), “is positively related to” (e.g., Loken & Ward, 1990), or “is greater than” (e.g., Biehal & Sheinin, 2007). However, the latter, RHNF, is occasionally set up as a research hypothesis. It usually includes verbal assertions about the nonexistence of some testable relationship between concepts, such as “has no influence on” (e.g., Bagozzi, 1980), “is unrelated to” (e.g., Loken & Ward, 1990), or “there is no difference” (e.g., Biehal & Sheinin, 2007). Unfortunately, RHEF has been frequently misunderstood as the alternative hypothesis and RHNF as the null hypothesis (e.g., Balachander & Ghose, 2003).

H.-C. Cho, S. Abe / Journal of Business Research 66 (2013) 1261–1266

In translating RHEF or RHNF into its own statistical hypothesis, it is important to note the fundamental difference, as in the following example:

RH1. “Regret has no influence on complaint intentions, whereas satisfaction has a negative influence on complaint intentions” (Tsiros & Mittal, 2000, H6; this is a good example involving both RHEF and RHNF).

RH1a. “Regret has no influence on complaint intentions” (italics added). [RHNF]

H0. γ11 = 0 (the assertion)

H1. γ11 ≠ 0 (the negation of the assertion)

RH1b. “Satisfaction has a negative influence on complaint intentions” (italics added). [RHEF]

H0. γ21 ≥ 0 (the negation of the assertion)

H1. γ21 < 0 (the assertion)

When translating each of the two types of the research hypothesis

into its statistical hypothesis, as statisticians commonly note, the null

hypothesis H0 and the alternative hypothesis H1 should be mutually

exclusive and collectively exhaustive (e.g., Casella & Berger, 1990, p.

345); an equality sign (i.e., =, ≤, or ≥) should always appear in the

null hypothesis, H0 (e.g., Harnett & Soni, 1991, p. 331). The above ex-

ample clearly shows a distinction and a logical linkage between the

expressive mode of the research hypothesis (i.e., RHEF and RHNF)

and that of the statistical hypothesis (i.e., each pair of H0 and H1).

Note that in the case of translating RHEF, such as RH1b, into its statis-

tical hypothesis, researchers should place the assertion in the alterna-

tive hypothesis H1; however, when translating RHNF, such as RH1a,

into its statistical hypothesis, the assertion should be put in the null

hypothesis H0, rather than in the alternative hypothesis H1. Note

that this logic of proof, in which H0 of RHNF is accepted, is weak. As

Wonnacott and Wonnacott (1984, p. 277) note, “Statistical testing does

not provide a formal criterion to accept the null hypothesis.” Even

when a test fails to reject the null hypothesis H0 of RHNF, that result

does not guarantee that H0 is true. The best one can do is to obtain a

narrow confidence interval that still includes the value assumed by H0

(frequently, but not always, 0); the narrower the interval, the stronger

the evidence for the null hypothesis (Frick, 1995; Nickerson, 2000).

Achieving such a narrow interval requires a correspondingly larger

sample size.
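
The sample-size point can be illustrated with a short simulation (a sketch using made-up data; none of the numbers come from the article):

```python
# Sketch: larger samples yield narrower confidence intervals, so an
# interval that still contains 0 lends stronger (though never
# conclusive) support to a no-effect null hypothesis. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ci_for_mean(sample, confidence=0.95):
    """t-based confidence interval for the mean of `sample`."""
    n = len(sample)
    se = sample.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return sample.mean() - t_crit * se, sample.mean() + t_crit * se

widths = []
for n in (30, 300, 3000):
    # The true effect is exactly 0 in this simulated population.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    lo, hi = ci_for_mean(sample)
    widths.append(hi - lo)
    print(f"n={n:5d}  95% CI = ({lo:+.3f}, {hi:+.3f})  width = {hi - lo:.3f}")
```

Each tenfold increase in n shrinks the interval by roughly a factor of √10, which is why claims of “no effect” demand large samples.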

The above examinations of the current definitional and classificatory

confusion with regard to the research and statistical hypotheses have

shown that without a clear distinction between the research hypothesis

and the statistical hypothesis, the basic rule of empirical testing, whereby

the two expressive modes of the research hypothesis (i.e., RHEF and

RHNF) determine the form of the statistical hypothesis, is easily violated.

As a result, a less rigorous form of statistical testing, such as adopting

two-tailed testing for a directional research hypothesis test, has become

a common practice. The current confusion involves other forms of mis-

understanding as well, such as beginning with the logic of disproof

(i.e., proof by contradiction) as the basis for setting up the null and alter-

native hypotheses (e.g., Anderson, Sweeney, & Williams, 1999, p. 330)

and only then working backward to specify the research hypothesis

and, as a consequence, overlooking a certain category of the research hypothesis

(i.e., RHNF), which should be handled with additional carefulness. Un-

less researchers make a clear distinction between the research and sta-

tistical hypotheses and observe the correct relationships between them,

less rigorous use of statistical testing will be hard to eliminate. Thus,

although it is less convenient and may appear somewhat redundant, it

is important to follow the orderly sequence of the empirical testing

procedure even when researchers use statistical null hypothesis

testing methods.

It may be thought that making a clear distinction between the re-

search hypothesis and the statistical hypothesis is simple enough. Al-

though it may look so at first glance, it is not, because the

problem lies between two different levels of methodology. Generally

speaking, there are three levels of methodology. The uppermost level

is that of scientific philosophy, in which the basic questions addressed

are “How can we draw a demarcation line between scientific knowl-

edge and all the other types of knowledge?” or “How can we assure

that scientific knowledge is making an advance?” The second level

of methodology is concerned with the development and test of a spe-

cific theory. It is related to logical and empirical validity of a theory.

Finally, the third level of methodology is more technical in nature.

Generally, it is related to how to conduct statistical testing. All three

levels are important, but no less important are the interrelationships

between and among the three levels. This article mainly discusses the

relationships between the second and third levels of methodology

and asserts that methodological decisions at the second level dictate

considerations at the third. Researchers who focus only on the third

level may miss the hierarchical relationships between and among

the three levels of methodology.

3.2. Two legitimate situations for two-tailed testing

This paper does not propose the elimination of two-tailed testing

in the context of theory testing because there are two different

types of situations where two-tailed testing is appropriate. First,

when researchers do not have a sufficient level of knowledge to sup-

port the directionality of the research hypotheses, the research

hypotheses may take a non-directional form, and the subsequent

use of two-tailed testing is appropriate (e.g., Gravetter &

Wallnau, 2007, p. 253). Second, two-tailed testing should be used

when researchers set up non-directional research hypotheses (i.e.,

RHNF) pertaining to the nonexistence of some testable relationship

between concepts, such as “has no influence on” (e.g., Bagozzi,

1980), “is unrelated to” (e.g., Loken & Ward, 1990), or “there is no

difference” (e.g., Biehal & Sheinin, 2007).
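
A minimal sketch of the second situation (simulated data; the groups are hypothetical): a non-directional, RHNF-style hypothesis such as “there is no difference between the two groups” translates into a two-tailed test.

```python
# Sketch: a non-directional "no difference" hypothesis maps to a
# two-tailed test. Simulated data with no true group difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=5.0, scale=1.0, size=100)
group_b = rng.normal(loc=5.0, scale=1.0, size=100)

# H0: mu_a = mu_b (the assertion, since this is RHNF)
# H1: mu_a != mu_b
t_stat, p_value = stats.ttest_ind(group_a, group_b, alternative="two-sided")
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
# Failing to reject H0 is consistent with "no difference" but, as noted
# earlier, statistical testing cannot formally confirm the null.
```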

3.3. Four new definitions regarding research and statistical hypotheses

Now, based upon a clear distinction and upon an exact connection

between the expressive mode of the research hypothesis (i.e., RHEF

and RHNF) and that of the statistical hypothesis (i.e., H0 and H1),

the authors propose the following four new definitions with regard

to the research …