Article Type: Original Article
1. Introduction
Over the past few years, networks and new technologies have become increasingly widespread in the community, and the number of individuals using computer software for instructional, vocational, and recreational purposes has grown accordingly. One area considerably impacted by computer technology is education, specifically language education (Chen & Lee, 2018; Seyyedrezaie et al., 2013). Accordingly, the relationship between computer technology and language education grows stronger each year.
One of the Web 2.0 instruments that has become extremely popular in language teaching and learning is blogging. A blog supports learner interactivity, enhances student engagement, and provides conditions for cooperation and knowledge innovation (Halic et al., 2010). Thus, by providing a collaborative environment, blogging serves as a suitable instrument for teaching writing (Solomon & Schrum, 2010; Zhou, 2015).
Regarding writing instruction, blogs have many advantages. They assist learners in communicating and cooperating in the target language beyond the classroom, provide students with opportunities to express their opinions at their own pace and in their own space, and encourage learners' accountability through self-publishing (Alharbi, 2015; Sun, 2009).
One of the areas that has been studied in connection with blogging is corrective feedback, especially peer feedback. Peer feedback establishes an authentic collaborative situation in the classroom. In such a learning context, learners are motivated to provide feedback on each other's writing, every learner's contribution is appreciated, and their self-confidence increases (Ernst, 2005). Ertmer et al. (2007) stated that despite learners' inclination toward online peer feedback, instructor feedback remains valuable and enables learners to reach a deeper comprehension. Also, Kurt and Atay (2007) suggested that peer feedback can be applied to decrease writing anxiety and boost learners' confidence.
Concerning the influence of corrective feedback in blogs on second/foreign language learning, few studies have focused on its impact on learners' achievement in specific language skills, such as writing (Arslan, 2014; Norouzi et al., 2025; Wei et al., 2023; Wondim et al., 2024). Although there are many investigations into the efficacy of online environments for writing, few have examined the effects of various kinds of feedback across different online instructional environments.
Accordingly, the present investigation seeks to explore whether prompts and recasts in face-to-face and blog-integrated writing instruction have statistically differential effects on the essay writing of EFL students, and whether there is a statistically significant difference between the essay writing of EFL students in blog-integrated writing instruction and in face-to-face instruction. It also aimed to investigate the combined effect of the type of instruction (blog/face-to-face) and the type of feedback (recast/prompt).
2. Literature Review
2.1. Blogging Application into the Educational System
E-learning reduces communication gaps between learners and instructors and offers cost-effective solutions (Fahim et al., 2011). With the advent of technology, many EFL teachers use online tools (e.g., email) and, mostly, collaborative tools (blogs, wikis) to improve students' writing performance (Beldarrain, 2006; Fathi et al., 2021; Guo & Li, 2024; Li, 2023; Taheri et al., 2025). One of these tools is the blog. In learning contexts, blogs have been applied experimentally to improve reading comprehension and writing skills. Findings suggest that although blogging should not substitute for face-to-face interaction, it can contribute a practice environment where learners can ponder, reflect, and acquire language gradually for a real-life audience (Pinkman, 2005).
Many studies revealed the effectiveness of using weblogs in writing classes (Alsamadani, 2018; Azari, 2017; Campbell, 2003; Jones, 2007; Wang, 2007). In this regard, Campbell (2003) analyzed computer-supported cooperative learning employing weblogs. The results of his study revealed that weblogs were beneficial for cooperative language learning and increased the students’ interest in writing.
In addition, Jones (2007) examined the effectiveness of employing blogs in an ESL writing class. It was revealed that students found blogs to be an effective instrument for writing instruction. Furthermore, the findings of his study revealed that blogs promoted the learners’ critical thinking skills; influenced the quality of learners’ writing; fostered meaningful learning; and motivated learners to write by publishing for an authentic audience.
Moreover, Wang (2007) treated weblogs as a peer-editing environment and investigated EFL students' perceptions of peer-editing exercises in blogs in comparison with face-to-face classes. The outcomes demonstrated that blog-based peer editing gave students stronger support than face-to-face classes. Additionally, Azari (2017) examined the effect of weblog use on writing performance and students' autonomy during a process-based writing course. The outcomes showed that using weblogs alongside process-based instruction helped learners produce far better writing than in-class writing instruction did.
Recently, Alsamadani (2018) examined the efficacy of blogs for learners' group and individual writing skills. The outcomes indicated that blog-based writing practice improves learners' writing in word choice, content, language mechanics, and style. Kılıçkaya (2019) concluded that concordance and metalinguistic feedback were the most favored by pre-service language teachers, and that immediate feedback was the most efficient way to support learning.
2.2. Corrective Feedback on Writing Performance
Many language investigators, such as Bitchener et al. (2005) and Ferris (2011), believe that how corrective feedback is presented on students' writing is an important issue in writing classes. However, which kind of feedback is most effective in developing students' writing skills, and which kind fits the requirements of particular students, remains under debate. Previous studies revealed that in foreign language classes, feedback can be provided by teachers, peers, and learners themselves (Yang, 2009). Recently, Naderi Farsani et al. (2024) emphasized the effect of corrective feedback on students' writing enhancement.
One example of implicit feedback that has gained rising attention is the recast: a reformulation of the student's non-target utterance with the original meaning intact. Recasts have been examined in research investigations including Nassaji et al. (2023), Yin (2021), Goo (2020), and Banaruee et al. (2018). Similarly, Doughty (2001) and Long (2007) believed that a recast supplies the student with an ideal chance to make the cognitive comparison between their incorrect utterance and the teacher's reformulation, and thereby detect the gap between target-like and non-target-like forms. Besides, Doughty and Varela (1998) reported that students who experienced corrective recasts outperformed the control group in written and oral production. Moreover, Nassaji (2009) examined recasts versus elicitation feedback and their subsequent impacts on grammatical aspects of speaking and writing performance; the findings confirmed that recasts were more influential than elicitations in their immediate effects. Banaruee et al. (2018) confirmed that both writing groups (recasts and direct corrective feedback) improved significantly in their writing; moreover, the recast group outperformed the direct corrective feedback group. Results of Schenck (2020) suggested that explicit corrective feedback, such as metalinguistic and direct feedback, seems more helpful for semantically and syntactically complicated features, for example, the past hypothetical conditional and the English article. Finally, Arianfar et al. (2022), Boston (2021), Sweilam (2020), and Kourtali and Révész (2020) confirm the outcomes of numerous studies that found recasting useful in supplying worthwhile opportunities for students.
In the same way, the prompt is defined by Lyster (2004) as elicitation, clarification requests, and metalinguistic clues connected to the well-formedness of the student's utterance. Prompts give learners the opportunity to self-repair, while recasts can lead learners only to repeat correct forms. Moreover, Lyster (2007) mentioned that self-repair following explicit feedback such as a prompt demands deeper processing than repetition of the instructor's recast.
Many investigations have compared explicit kinds of corrective feedback, such as prompts, with recasts (Ammar & Spada, 2006; Ammar, 2008; Lyster, 2004). The findings of these studies revealed that the test scores of learners who received recasts did not improve as much as those of learners who received prompts. Recently, Li and Iwashita (2021) examined the comparative impacts of negotiated prompts and recasts on EFL learners' interactions, primarily focusing on grammatical accuracy. Their results revealed that negotiated prompts proved effective for the grammatical accuracy of regular and irregular past tense verbs and interrogatives.
A number of investigations have examined the impacts of corrective feedback in Computer-Assisted Language Learning (CALL) (Jiang & Eslami, 2022; Sachs & Suh, 2007; Shirani, 2020; Shintani, 2016; Yamashita, 2021; Yilmaz, 2012). These analyses have mainly used CALL settings to assess the efficacy of corrective feedback (for instance, Shintani, 2016; Yilmaz, 2012); on the other hand, little attention has been paid to supplying corrective feedback in online communication modes (Aeen et al., 2022; Rassaei, 2022). Although there are some studies on writing performance and the comparative effects of corrective feedback, this investigation addresses a gap in the literature by focusing on the differential effects of recasts and prompts on EFL students' writing performance in a weblog-based environment.
Concerning the gap in previous studies, four research questions were addressed:
1. Is there a statistically significant difference between the essay writing of EFL students in blog-integrated writing instruction and that of students in face-to-face instruction?
2. Do prompts and recasts have statistically differential effects on EFL students' essay writing?
3. Is there a statistically significant difference between the essay writing of students who receive prompts in face-to-face classrooms and that of students who receive prompts in the blog-integrated environment?
4. Is there a statistically significant difference between the essay writing of students who receive recasts in face-to-face classrooms and that of students who receive recasts in the blog-integrated environment?
3. Method
3.1. Participants
The study participants were 80 male and female EFL learners, aged 19 to 24, chosen out of 96 learners from Farhangian University (Beheshti and Hasheminezhad campuses) during one semester. All were junior students, selected because they had passed the grammar courses and Writing Courses 1 and 2, ensuring that they had knowledge of essay writing.
3.2. Materials and Instruments
The study instruments were the Preliminary English Test (PET) and two essay writing tests.
3.2.1. PET
The investigation was carried out with 80 EFL university learners selected out of 96 according to their scores on a language proficiency test. A 67-item standard PET, released by Cambridge ESOL, was administered to measure the students' general English proficiency level.
3.2.2. Essay Writing Test
A writing test was administered to the students of both blog-integrated and face-to-face groups as a pretest to evaluate their ability to write an essay before the treatment. This piloted sample writing test revolved around the topic "computers". Another writing test, which revolved around the topic "changes in the 21st century", was employed as a post-test to assess students' writing at the end of the treatment phase. Each test instructed participants to write an essay of at least 200 words within a time limit of 40 minutes, adhering to specific criteria. Writing performance on both the pre-test and the post-test was assessed using the analytical scoring rubric developed by Jacobs et al. (1981), which evaluates essays on the criteria of content, organization, vocabulary, language use, and mechanics. Two independent raters were recruited to ensure the consistency of the scoring process, and inter-rater reliability was calculated using Cohen's Kappa, which indicated a high degree of consistency with a reliability index of 0.80.
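For illustration, Cohen's Kappa can be computed directly from two raters' category assignments. The sketch below uses hypothetical band labels, not the study's actual scoring data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters labelling the same items with nominal categories."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label distribution.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical band assignments ("high"/"mid"/"low") from two raters:
r1 = ["high", "high", "mid", "low", "mid", "high", "low", "mid"]
r2 = ["high", "mid",  "mid", "low", "mid", "high", "low", "low"]
print(round(cohens_kappa(r1, r2), 3))
```

For continuous rubric scores like those of Jacobs et al. (1981), a weighted kappa or a correlation coefficient is the more common choice; the unweighted form above covers the nominal-agreement case.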
3.3. Procedure
During the preparatory stage of the present study, to homogenize the group of participants, all 96 junior students took a version of the PET, which showed a reliability of 0.91. Only 80 learners were selected to participate in the research. The students were assigned to experimental and control groups randomly: 40 learners in the control group were exposed to face-to-face writing instruction, and 40 learners in the experimental group received blog-integrated writing instruction. Subsequently, the experimental and control groups were each divided into two subgroups; one received prompts, and the other received recasts. There were thus four distinct groups, each with its own specifications and purpose within the study. Figure 1 displays the participants divided into four groups. Also, to reduce learners' anxiety and familiarize them with blogging, two face-to-face sessions were conducted for participants before the first session.
Figure 1
Classification of Participants in the Research
Following the instructions on practicing blogs, the learners needed to establish their blogs for publishing essays and receiving comments on their essays. Also, the teacher’s blog was employed for sharing language content, opinion exchange, instructional materials, and online reports. Moreover, the teacher’s blog included links to students’ weblogs.
One day before the first session, all students took a writing pretest to evaluate their ability to write an essay. During the treatment, learners were instructed on how to write an essay and wrote several drafts, on which they received the instructor's comments in the form of recasts or prompts. Next, learners revised their drafts by applying this feedback and submitted them to the teacher through the blog or in the face-to-face class. A posttest was then administered to assess the students' writing after the instruction.
The students received recasts or prompts for their initial drafts based on the subgroup they belonged to and criteria based on an analytical scoring rubric developed by Jacobs et al. (1981). Blog-based learners were supposed to post their drafts to their blogs to receive feedback from the teacher. The instructor read the draft, provided the students with a recast or prompt on the basis of the subgroups to which students belonged, and commented on other elements of writing, for instance, logical sequence. In face-to-face teaching and blog-integrated instruction, for prompts, the teacher elicited the proper structure from learners by giving information, comments, or questions to guide them to self-repair. On the other hand, in the case of recast, the teacher reformulated the learners’ sentences, and learners were not supplied with a chance to reconsider their erroneous sentences by themselves. The teacher gave the corrective feedback to EFL learners according to the criteria of content, organization, vocabulary, language use, and mechanics for both face-to-face and blog-integrated groups.
The learners who published their drafts on their weblogs would see how their structures were flawed, make modifications, and post the revised drafts to their blogs again. In addition, they could see their peers' drafts and read the instructor's suggestions posted on their classmates' blogs. The same procedure was followed in the face-to-face group, except that those learners did not have the opportunity to see the teacher's feedback on their classmates' essays. To verify that the ratings assigned by the two raters were consistent with each other, inter-rater agreement was computed, which came out at 0.80.
3.4. Data Analysis
The study participants were chosen by convenience sampling. This quasi-experimental study used a pretest-posttest design to compare the two kinds of corrective feedback on essay writing. The students were divided randomly into a face-to-face writing instruction group and a blog-integrated writing instruction group. Writing performance was the dependent variable; the type of instruction and the kind of corrective feedback (recast/prompt) were the two independent variables.
All data from the current study were analyzed using the SPSS 28 statistical software package, which was also used for data input and processing. Pre-test and post-test scores were collected; a one-sample Kolmogorov-Smirnov test was then run to check the normality assumption of the score distributions in the blog-based and face-to-face groups. Means and standard deviations were computed as descriptive statistics. An independent-samples t-test was run to check for a difference between the writing mean scores of the face-to-face and blog-based groups before the treatment.
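These checks can be reproduced outside SPSS. The snippet below is a sketch using simulated scores (the group sizes, means, and SDs merely echo Table 2; they are not the study's raw data) with SciPy's one-sample Kolmogorov-Smirnov test and independent-samples t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated proficiency scores for the two groups (illustrative only).
blog = rng.normal(loc=67.4, scale=3.0, size=40)
face = rng.normal(loc=67.8, scale=3.5, size=40)

for name, scores in [("blog", blog), ("face-to-face", face)]:
    # One-sample K-S test against a normal with the sample's own mean/SD.
    # (Estimating the parameters from the data makes the test approximate,
    # as in SPSS's legacy one-sample K-S procedure.)
    d, p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
    print(f"{name}: D = {d:.3f}, p = {p:.3f}")

# Independent-samples t-test on the pretest means of the two groups.
t, p = stats.ttest_ind(blog, face)
print(f"t = {t:.3f}, p = {p:.3f}")
```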
A two-way between-groups ANOVA was conducted (a) to investigate the effect of face-to-face versus blog-integrated writing instruction (one independent variable with two levels) on students' essay writing as the dependent variable; (b) to examine the effect of recasts versus prompts on students' essay writing; and (c) to investigate the interaction between the two independent variables (recast/prompt and face-to-face/blog-integrated instruction) on learners' essay writing, within each main group and across the comparison of both main groups.
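For a balanced 2×2 design such as this one, the between-subjects decomposition behind a two-way ANOVA can be computed by hand. The sketch below partitions the total sum of squares into the two main effects, the interaction, and error; the cell data are simulated for illustration and do not reproduce the study's scores:

```python
import numpy as np
from scipy import stats

def two_way_anova(cells):
    """Balanced two-way between-groups ANOVA. `cells[i][j]` holds the scores
    for level i of factor A and level j of factor B (equal cell sizes)."""
    cells = np.asarray(cells, dtype=float)          # shape (a, b, n)
    a, b, n = cells.shape
    grand = cells.mean()
    cell_means = cells.mean(axis=2)
    ss_a = b * n * ((cells.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_b = a * n * ((cells.mean(axis=(0, 2)) - grand) ** 2).sum()
    ss_cells = n * ((cell_means - grand) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b                  # interaction term
    ss_err = ((cells - cell_means[:, :, None]) ** 2).sum()
    df_err = a * b * (n - 1)
    out = {}
    for name, ss, df in [("A", ss_a, a - 1), ("B", ss_b, b - 1),
                         ("AxB", ss_ab, (a - 1) * (b - 1))]:
        f = (ss / df) / (ss_err / df_err)
        out[name] = (f, stats.f.sf(f, df, df_err))  # F ratio and its p-value
    return out

rng = np.random.default_rng(0)
# Simulated 2 (instruction) x 2 (feedback) design, 20 scores per cell.
data = [[rng.normal(18.2, 2.3, 20), rng.normal(14.2, 2.6, 20)],
        [rng.normal(16.1, 2.5, 20), rng.normal(15.0, 2.4, 20)]]
for effect, (f, p) in two_way_anova(data).items():
    print(f"{effect}: F = {f:.2f}, p = {p:.4f}")
```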
4. Results and Discussion
4.1. Results
The study participants were 80 EFL students. A PET was used to establish a homogeneous group of participants in terms of English language proficiency. Table 1 displays the descriptive statistics of PET used for homogenization.
Table 1
PET Descriptive Statistics
| N | Minimum | Maximum | Mean | Std. Deviation | Variance | Skewness | Std. Error of Skewness | Kurtosis | Std. Error of Kurtosis |
|---|---------|---------|------|----------------|----------|----------|------------------------|----------|------------------------|
| 80 | 59.00 | 77.00 | 67.61 | 3.24 | 9.50 | 0.122 | 0.309 | 0.375 | 0.608 |
As indicated in Table 1, the mean and standard deviation were M = 67.61 and SD = 3.24, respectively. The ratios of skewness and kurtosis to their standard errors (0.122/0.309 ≈ 0.39 and 0.375/0.608 ≈ 0.62) fell between -1.96 and +1.96, so the distribution can be considered normal.
Besides, a one-sample Kolmogorov-Smirnov test was calculated to check the normality assumption of the distributed scores in blog-based and face-to-face groups in Table 2.
Table 2
One-Sample Kolmogorov-Smirnov Test
|  | Test Score (Blog-integrated) | Test Score (Face-to-Face) |
|---|------------------------------|----------------------------|
| N | 40 | 40 |
| Normal Parameters: Mean | 67.42 | 67.81 |
| Normal Parameters: Std. Deviation | 3.03 | 3.54 |
| Kolmogorov-Smirnov Z | 0.539 | 0.539 |
| Asymp. Sig. (2-tailed) | 0.09 | 0.09 |
Since the sig value is larger than .05, it can be stated that the two groups were normally distributed.
Table 3
Descriptive Statistics of Writing Post-test Scores
| Instruction | Feedback | Mean | Std. Deviation | N |
|-------------|----------|------|----------------|---|
| Blog | Prompt | 18.25 | 2.320 | 20 |
|  | Recast | 14.23 | 2.580 | 20 |
|  | Total | 16.24 | 2.461 | 40 |
| Face-to-Face | Prompt | 16.10 | 2.520 | 20 |
|  | Recast | 15.00 | 2.385 | 20 |
|  | Total | 15.55 | 2.452 | 40 |
| Total | Prompt | 17.17 | 2.420 | 40 |
|  | Recast | 14.61 | 2.469 | 40 |
|  | Total | 15.89 | 2.456 | 80 |
Table 3 presents the means of the face-to-face (M = 15.55, SD = 2.45) and blog-based (M = 16.24, SD = 2.46) groups. To examine the first research question, comparing the differential effects of a weblog-based environment and face-to-face teaching on EFL students' essay writing, a two-way ANOVA was conducted (Table 4).
Table 4
Tests of Between-Subjects Effects (Writing Posttest Scores)
| Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared | Noncent. Parameter | Observed Power^b |
|--------|-------------------------|----|-------------|---|------|---------------------|--------------------|------------------|
| Corrected Model | 5921.533^a | 3 | 1645.327 | 254.358 | .000 | .754 | 811.187 | 1.000 |
| Intercept | 19368.067 | 1 | 19368.067 | 3114.524 | .000 | .969 | 3165.203 | 1.000 |
| Instruction | 2571.130 | 1 | 2571.130 | 355.142 | .041 | .803 | 361.487 | 1.000 |
| Feedback | 2965.318 | 1 | 2965.318 | 441.165 | .021 | .867 | 422.509 | 1.000 |
| Instruction * Feedback | 1582.116 | 1 | 1582.116 | 245.875 | .020 | .736 | 224.884 | 1.000 |
| Error | 513.395 | 76 | 7.120 |  |  |  |  |  |
| Total | 26948.012 | 80 |  |  |  |  |  |  |
| Corrected Total | 7201.672 | 79 |  |  |  |  |  |  |

Dependent variable: essay writing score.
a. R Squared = 0.729 (Adjusted R Squared = 0.709)
b. Computed using alpha = 0.05
The findings of the two-way ANOVA revealed a significant difference between the writing performance of the experimental and control groups, F(1, 76) = 355.14, p = .041, with a large effect size (partial eta squared = .803). Considering the mean writing scores (Table 3), the learners in the blog-based environment outperformed those in the face-to-face classroom. Consequently, the first null hypothesis was rejected.
The second research question examined whether prompts and recasts have any differentially significant impacts on EFL students' essay writing. As indicated in Table 4, there is a significant difference between the writing performance of EFL students receiving prompts (M = 17.17) and those receiving recasts (M = 14.61), F(1, 76) = 441.17, p = .021, with a large effect size (partial eta squared = .867). Prompts thus had a differentially significant effect on EFL learners' writing performance: learners receiving prompts outperformed those receiving recasts in both the face-to-face and blog groups. Consequently, the second null hypothesis was rejected.
The third null hypothesis expressed that there is no statistically significant difference between the essay writing of students who receive prompts in the face-to-face classrooms in comparison to students in the blog-based environment. To explore the third null hypothesis, the findings of the Two-Way ANOVA were considered (Table 4).
Table 3 indicated that the writing scores of learners receiving prompts in the weblog-based environment (M = 18.25) were higher than those of learners in the face-to-face classroom (M = 16.10). The two-way ANOVA showed that the interaction between feedback type and instruction was significant, F(1, 76) = 245.88, p = .020, which is smaller than .05. Therefore, there is a statistically significant difference between the essay writing of students receiving prompts in a face-to-face classroom and those receiving prompts in a blog-based environment.
The fourth research question asked if there is any statistically significant difference between the essay writing of students in the recast group in the face-to-face classroom in comparison to those through the blog-integrated environment.
Table 3 illustrated that in face-to-face instruction, the mean essay writing score of students who received recasts (M = 15.00) was higher than that of the blog group (M = 14.23). The findings of the two-way ANOVA (Table 4) demonstrated a statistically significant difference between the essay writing of the recast groups in the face-to-face classroom and in the blog-integrated environment; EFL learners receiving recasts performed better under face-to-face instruction. It can be concluded that there is an interaction between the kind of instruction and the kind of feedback: learners' writing performance varied with the kind of feedback they received in relation to the type of instructional model they were exposed to. Consequently, the fourth null hypothesis was rejected.
4.2. Discussion
This investigation aimed to explore the possible differential impacts of different types of feedback (recast/prompt) on the essay writing of learners exposed to blog-integrated versus face-to-face writing instruction. Feedback supplies students with the chance to compare their own responses with correct forms (Doughty, 2001). Regarding the impact of corrective feedback, the findings revealed that students' posttest writing scores exceeded their pretest scores in both subgroups (recast/prompt).
Considering the first research question, it was demonstrated that the essay writing of students under blog-integrated instruction improved significantly more than that of students under face-to-face instruction. This finding is consistent with Campbell (2003) and Zhang (1995), who revealed that the writing of learners using weblogs for their assignments improved more than that of peers in face-to-face writing classes. One explanation is the online nature and user-friendly features of blogs, which encourage students' motivation and engagement in writing. This is also supported by studies emphasizing the effectiveness of blogs as an influential instrument for writing skills (Noytim, 2010; Zhou, 2015).
Another justification for the blog's effectiveness is that it provides learners with the opportunity to read all of their peers' essays at any time and in any location, whereas in face-to-face instruction they had no such opportunity (Richardson, 2010). Richardson (2010) also supports the finding that feedback provided through blogs is more flexible than face-to-face feedback. Concerning L2 writing, the existing body of empirical research shows the positive role of Web 2.0 technologies in enhancing writing competencies (e.g., Bikowski & Vithanage, 2016; Fathi & Nourzadeh, 2019; Strobl, 2014). However, Khany and Boghayeri (2013) offered a contradictory view that downplays the efficacy of Web 2.0 instruments for teaching writing: despite their usefulness in foreign language teaching, these tools carry limitations that cannot be ignored, among them teachers' unfamiliarity with the tools.
Regarding the second research question, the data analysis showed that EFL learners receiving prompts outperformed those who received recasts in both the face-to-face and blog groups. The current findings on the usefulness of prompts for improving learners' writing performance are substantiated by Ellis (2006), who claims that metalinguistic feedback is more beneficial than recasts. The finding, nevertheless, runs contrary to Banaruee et al. (2018), who claimed that recasts led to better writing performance than direct feedback. In a nutshell, such differences in findings regarding the efficacy of recasts and prompts may be context-bound or influenced by the targeted skill.
Considering the third research question, the findings indicated that the improvement of learners' essay writing depended on the kind of feedback combined with the kind of instructional environment. Prompts in blogs led to better writing performance than prompts in a face-to-face writing environment. This result supports Razagifard and Razzaghifard (2011), who revealed a statistically significant difference between the efficacy of explicit feedback in face-to-face and online instructional environments. It is also in line with Heift (2004), which supported the superiority of prompts over recasts in the online environment, with students in the prompt group outperforming their counterparts. Likewise, it accords with Panova and Lyster (2002), who investigated the efficacy of recasts in face-to-face instruction and found that recasts yielded the least improvement, while explicit corrective feedback such as prompts led to the most improvement in learners' essay writing. Moreover, this finding follows Loewen and Philp (2006) regarding recasts: they consider recasts not very influential because students are simply given the correct structure, and the important role of self-repair is neglected. In addition, the conclusion is in line with Li and Iwashita (2021), showing the effectiveness of prompts for increasing grammatical accuracy. However, it contradicts Loewen and Erlam (2006), who explored the relative efficacy of prompts and recasts in text-chat interaction and found no significant advantage for either feedback type over the control group.
One justification for the effective role of blog-integrated instruction in increasing the effectiveness of prompts may be the nature of blogs, which give students the option to negotiate with their peers; negotiation of information is the fundamental element by which prompts lead students to the proper form of their utterances.
The superiority of prompts over recasts in both face-to-face and online environments may be due to the different nature of these two kinds of feedback. By precisely signaling the existence of a mistake and pushing students to modify their own errors, the prompt was more effective, whereas with recasts students had no opportunity to revise their own errors or to construct pushed output, because the correct form was provided for them (Swain, 1985).
Regarding the last research question, the results indicated that recasts in face-to-face writing instruction were more effective than recasts delivered via blog-based writing instruction. This is in line with the findings of Cabaroglu et al. (2010), who indicated that face-to-face recasts were more effective for students' writing performance than computer-mediated recasts. The effectiveness of recasts in face-to-face teaching compared with blog-based teaching may be due to the different capacities of these instructional environments to reduce the ambiguity of recasts. Since recasts are ambiguous, learners sometimes need to know why their sentences were incorrect. In the face-to-face class, learners could ask whoever had provided the recast why their sentences or words had been revised in a particular way, whereas learners in the blog-integrated group lacked such an opportunity because their peers were unknown to them in the online context.
5. Conclusion and Implications
Regarding the first research question, the findings revealed that blog-based writing instruction was more influential than face-to-face instruction, though the essay writing of learners under both models of instruction improved. Regarding the second research question, the results of the two-way ANOVA indicated that learners who received prompts as a type of corrective feedback outperformed those exposed to recasts.
Moreover, it was demonstrated that there is a statistically significant interaction between the type of feedback and the type of instructional environment: the quality of learners' essay writing changed with the kind of corrective feedback in relation to the sort of instruction they received. The findings have pedagogical implications for EFL teachers and language institutions incorporating blogs into writing instruction to enhance learners' writing performance. Blog-integrated writing can be particularly beneficial for Iranian EFL learners, who often struggle with writing in English, and the positive effects observed here add to the growing body of research advocating the use of technology in language learning. Curriculum designers, EFL educators, and policy makers may therefore consider integrating technology-assisted writing instruction into the EFL writing curriculum.
The findings of this research offer a way of recognizing and evaluating individuals' behaviors, beliefs, potential, and interpersonal skills when employing blogs for essay writing, and they add to EFL writing research by explaining how EFL writing learners responded to the application of blogs. With this information, researchers, foreign language instructors, instructional technologists, and curriculum planners can better understand how learners employ blogs for writing development, self-regulated learning, and automaticity. To sum up, more studies exploring various measures of feedback assessment are needed to expand the knowledge of second language acquisition researchers and EFL teachers (Nassaji, 2020). Future studies should also evaluate the importance of understanding the emotional dimensions of writing revision (Ke & Zhou, 2024).