
Evaluating Errors and Improving Performance of ChatGPT: A Research Paper

Review Article | DOI: https://doi.org/10.31579/2835-2882/019


  • Som Biswas 1*

Department of Radiology, Le Bonheur Hospital, The University of Tennessee Health Science Center, Memphis, Tennessee, USA.

*Corresponding Author: Som Biswas, Department of Radiology, Le Bonheur Hospital, The University of Tennessee Health Science Center, Memphis, Tennessee, USA.

Citation: Som Biswas, (2023), Evaluating Errors and Improving Performance of ChatGPT: A Research Paper, Clinical Research and Studies, 2(3); DOI:10.31579/2835-2882/019

Copyright: © 2023, Som Biswas. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: 09 May 2023 | Accepted: 19 May 2023 | Published: 30 May 2023

Keywords: ChatGPT; natural language processing; error analysis; language models

Abstract

ChatGPT, a state-of-the-art language model developed by OpenAI, has revolutionized the field of natural language processing and human-computer interaction. However, despite its impressive capabilities, ChatGPT is not immune to errors and limitations. This research paper aims to identify common errors made by ChatGPT and propose potential methods to improve its performance. By analysing the underlying causes of these errors and exploring strategies to mitigate them, we can enhance the overall user experience and reliability of ChatGPT.

Introduction

Background on ChatGPT and Its Significance in Natural Language Processing

Natural Language Processing (NLP) is a field of artificial intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human language. Recent advancements in deep learning techniques, particularly in the area of language models, have significantly impacted NLP applications. ChatGPT, developed by OpenAI, is one such language model that has gained considerable attention due to its impressive language generation capabilities.

ChatGPT is based on the Transformer architecture, which employs self-attention mechanisms to capture contextual relationships between words in a sentence. It is a variant of GPT (Generative Pre-trained Transformer) and has been trained on a vast corpus of text data from the internet, making it proficient in generating coherent and contextually relevant responses to user inputs.
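To make the self-attention idea above concrete, the following is a minimal, illustrative sketch of scaled dot-product attention in NumPy. The dimensions, random weights, and the self_attention function name are assumptions for demonstration only; production Transformers add learned per-head projections, masking, and many stacked layers.

import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                 # context-weighted values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                       # 5 toy tokens, 8-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(tokens, w_q, w_k, w_v).shape)     # -> (5, 4)

Each output row is a weighted mixture of all token representations, which is how the architecture captures contextual relationships between words in a sentence.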

The significance of ChatGPT lies in its ability to engage in dynamic and interactive conversations with users, simulating human-like dialogue. It has applications in various domains, including customer support chatbots, virtual assistants, language translation, content generation, and more. ChatGPT enables seamless interactions between humans and machines, fostering improved user experiences and increasing the accessibility of AI-powered language processing tools.

By leveraging the power of large-scale language models like ChatGPT, NLP applications have made remarkable progress in automating tasks such as text completion, sentiment analysis, summarization, and question answering. ChatGPT's natural language understanding and generation capabilities provide opportunities for more efficient and personalized human-computer interactions, driving advancements in fields such as healthcare, education, and customer service.

However, despite its numerous strengths, ChatGPT is not without its limitations. It can sometimes produce incorrect or nonsensical responses, fail to capture subtle nuances in language, and be sensitive to input phrasing or context. Understanding and addressing these errors is crucial for further improving the performance and reliability of ChatGPT and advancing the field of NLP.

By analysing the errors made by ChatGPT and exploring strategies to mitigate them, researchers aim to enhance its language comprehension, context awareness, and response generation capabilities. This research paper aims to contribute to the ongoing efforts in refining ChatGPT, enabling more robust and accurate AI-powered language interactions and unlocking its full potential in various real-world applications [1].

Importance of Identifying and Addressing Errors in Language Models

Identifying and addressing errors in language models is of paramount importance in the field of natural language processing (NLP). Here are several key reasons why it is crucial to focus on error identification and mitigation:

Improving User Experience: Language models are designed to assist and interact with users, providing accurate and relevant information. Errors in language models can lead to misunderstandings, incorrect responses, or misleading information, resulting in a poor user experience. By identifying and addressing errors, we can enhance the reliability and effectiveness of language models, leading to more satisfying user interactions.

Enhancing Communication Efficiency: Language models are often used to automate tasks, provide information, or handle customer support inquiries. Errors in language models can lead to inefficiencies, requiring users to repeat or rephrase their queries and leading to frustration. By reducing errors, we can streamline communication processes and improve overall efficiency in human-computer interactions [2].

Ensuring Ethical and Responsible AI: Language models can inadvertently generate biased, offensive, or harmful content. Identifying and mitigating errors in language models is essential for addressing these ethical concerns and ensuring responsible AI practices. By minimizing errors, we can reduce the likelihood of biased outputs, offensive language, or misinformation, thus promoting fairness, inclusivity, and ethical standards in AI systems.

Enhancing Trust and Reliability: Language models are often used in critical applications, such as healthcare, legal, or financial domains, where accuracy and reliability are paramount. Errors in language models can erode trust and confidence in AI systems, leading to skepticism and reluctance in adopting these technologies. By addressing errors, we can enhance the trustworthiness and reliability of language models, fostering greater acceptance and utilization in real-world applications.

Advancing State-of-the-Art NLP: Identifying and analysing errors in language models provides valuable insights into their limitations and areas for improvement. By understanding the causes of errors, researchers can develop novel techniques, algorithms, and architectures to overcome these limitations and push the boundaries of NLP research. The iterative process of error identification and mitigation drives advancements in the field, resulting in more powerful and accurate language models [3].

Enabling Real-World Applications: Language models with reduced errors have broader applicability across industries and domains. By addressing errors, language models can become more effective in information retrieval, content generation, language translation, virtual assistance, and other NLP tasks. This, in turn, facilitates the deployment of AI-powered systems in real-world scenarios, enabling organizations and individuals to harness the benefits of NLP technologies.

Objectives of the Research Paper

The objectives of the research paper on analysing errors and enhancing the performance of ChatGPT are as follows:

Identify Common Errors: The research aims to analyze and categorize the common errors made by ChatGPT during natural language conversations. This involves identifying grammatical errors, semantic errors, contextual errors, ambiguity resolution challenges, factual inaccuracies, and inappropriate responses.

Understand Causes of Errors: The paper seeks to investigate the underlying causes of errors in ChatGPT. This involves examining limitations in training data, challenges in disambiguation and context preservation, biases in language models, and issues related to factual knowledge representation.

Propose Mitigation Strategies: The research paper aims to propose effective strategies to mitigate the identified errors in ChatGPT. This involves exploring fine-tuning techniques, transfer learning and domain adaptation approaches, reinforcement learning with user feedback, context-awareness enhancements, and ethical considerations in error mitigation [4].

Evaluate Performance Improvements: The paper intends to conduct experimental evaluations to assess the effectiveness of the proposed error mitigation strategies. This involves quantitatively measuring error rates, comparing performance metrics, gathering user feedback, and conducting subjective evaluations to determine improvements in ChatGPT's performance.

Discuss Remaining Challenges and Future Directions: The research paper seeks to discuss the remaining challenges and limitations in error mitigation for ChatGPT. It aims to highlight areas that require further research and development, addressing issues such as biases, privacy concerns, and harmful outputs. The paper also explores potential ethical considerations and societal implications related to error analysis and mitigation in language models.

Methodology

Description of the Dataset Used for Error Analysis

The dataset used for error analysis in the research paper comprises a collection of user interactions with ChatGPT. This dataset is specifically created to capture a wide range of conversational scenarios and expose potential errors made by the language model. The dataset can be constructed through various methods, including but not limited to:

Human-Generated Conversations: Human annotators engage in conversations with ChatGPT, simulating different user scenarios and covering various topics. These conversations can be collected through crowd-sourcing platforms or by recruiting domain experts [5].

Simulated User Interactions: Instead of using human annotators, simulated user interactions can be generated using predefined user personas, templates, or dialogue simulation frameworks. This approach allows for controlled experiments and the exploration of specific error types or linguistic phenomena.

Crowdsourced Dialogue Collection: Large-scale dialogue datasets, such as the Persona-Chat dataset, can be utilized. These datasets consist of conversations between two or more individuals and can be adapted for error analysis by treating one side of the conversation as the user interacting with ChatGPT.

Scraped Dialogue Data: Conversations from publicly available sources, such as online forums, social media platforms, or customer support chats, can be scraped and anonymized to create the dataset. This approach provides real-world conversational data that reflects diverse user inputs and potential errors.

The dataset should include a variety of conversational patterns, language styles, and potential error triggers to ensure comprehensive error analysis. It should cover a broad range of error categories, including grammatical errors, semantic errors, contextual errors, ambiguity resolution challenges, factual inaccuracies, and inappropriate responses.

To facilitate the error analysis process, the dataset may also include annotations or labels indicating the specific error types present in each conversation or turn. These annotations can be provided by human experts who identify and categorize the errors based on predefined criteria.
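To illustrate, a single annotated record in such a dataset might look like the sketch below. The field names, labels, and severity scale are hypothetical assumptions shown only to indicate the kind of structure the annotations could take, not a schema prescribed by the paper.

# Hypothetical structure for one annotated conversation turn.
annotated_turn = {
    "conversation_id": "conv-00123",
    "turn": 4,
    "user_input": "Can you book the earlier train instead?",
    "model_response": "Sure, I have cancelled your hotel reservation.",
    "error_labels": ["contextual_error", "factual_inaccuracy"],
    "annotator_notes": "Response ignores the train-booking context established in turn 2.",
    "severity": "major",  # e.g. minor / major / critical
}
print(annotated_turn["error_labels"])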

By utilizing a carefully curated dataset, researchers can conduct a thorough error analysis, identify patterns, and gain insights into the specific types of errors made by ChatGPT. This forms the basis for proposing effective error mitigation strategies and enhancing the performance and reliability of the language model.

Evaluation Metrics and Techniques Employed for Error Identification

In the research paper on error analysis and performance enhancement of ChatGPT, several evaluation metrics and techniques can be employed for error identification. Here are some commonly used techniques and metrics in error analysis:

Human Evaluation: Human evaluation involves human annotators assessing the quality of ChatGPT's responses. Annotators can rate the responses on various scales, such as fluency, relevance, grammaticality, and overall quality. This subjective evaluation provides valuable insights into errors related to semantics, context, and appropriateness.

Automatic Evaluation Metrics:

Perplexity: Perplexity is a commonly used metric that quantifies how well a language model predicts a given text. Higher perplexity values indicate higher uncertainty and potential errors in language generation.

BLEU (Bilingual Evaluation Understudy): BLEU is a metric that measures the similarity between the generated responses and a set of reference responses. It can provide an indication of the quality and appropriateness of the generated text [6]. A short computational sketch of these automatic metrics appears at the end of this subsection.

ROUGE (Recall-Oriented Understudy for Gisting Evaluation): ROUGE is a set of metrics commonly used for evaluating the quality of text summarization. It measures the overlap between the generated response and the reference summaries.

METEOR (Metric for Evaluation of Translation with Explicit ORdering): METEOR is a metric that considers both exact word matches and paraphrases, providing a comprehensive evaluation of the generated text.

Error Annotation: Error annotation involves manually annotating the dataset to identify specific error types. Human annotators can mark grammatical errors, semantic inconsistencies, factual inaccuracies, inappropriate responses, and other error categories. These annotations help in quantifying and analysing the distribution of different error types.

Comparison with Gold Standard: In certain cases, a gold standard dataset or reference responses may be available for the given conversational scenarios. The generated responses from ChatGPT can be compared against these reference responses to identify deviations, errors, and discrepancies.

User Feedback and Surveys: Feedback from users who interacted with ChatGPT can be collected through surveys or feedback mechanisms. Users can report perceived errors, misunderstandings, or instances where ChatGPT failed to provide the desired information. User feedback can provide insights into errors from a user's perspective.

By employing these evaluation techniques and metrics, researchers can comprehensively identify errors made by ChatGPT and quantify their impact. The combination of objective metrics and subjective evaluations from human annotators and users helps in understanding the strengths and weaknesses of ChatGPT and guides the formulation of error mitigation strategies [7].
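As a brief illustration of how some of the automatic metrics above can be computed, the sketch below scores one generated reply against a reference using sentence-level BLEU (via NLTK) and derives perplexity from per-token log-probabilities. The example texts and the log-probability values are made-up assumptions for demonstration; in a real evaluation the probabilities would come from the language model itself.

import math
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One reference response and one generated candidate (illustrative only).
reference = [["the", "appointment", "is", "on", "friday", "at", "noon"]]
candidate = ["the", "appointment", "is", "friday", "at", "noon"]

# Sentence-level BLEU; smoothing avoids zero scores on short responses.
bleu = sentence_bleu(reference, candidate,
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")

# Perplexity from per-token log-probabilities (placeholder values standing in
# for the model's actual scores over the candidate tokens).
token_log_probs = [-0.9, -1.2, -0.4, -2.1, -0.7, -1.0]
perplexity = math.exp(-sum(token_log_probs) / len(token_log_probs))
print(f"Perplexity: {perplexity:.2f}")

ROUGE and METEOR can be computed analogously with dedicated packages; the common thread is that all of these scores compare generated text against references rather than judging it in isolation.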

Pre-processing Steps to Ensure Data Quality and Consistency

To ensure data quality and consistency in the research paper on error analysis and performance enhancement of ChatGPT, several pre-processing steps can be undertaken. These steps help in preparing the dataset for accurate error identification and analysis. Here are some common pre-processing steps:

Cleaning and Filtering: Remove any irrelevant or noisy data from the dataset. This can include removing duplicate conversations, eliminating incomplete or corrupted dialogues, and filtering out irrelevant or off-topic conversations. Ensuring data cleanliness helps maintain the integrity of the dataset.

Tokenization: Tokenize the text data by breaking it down into individual words or subword units. Tokenization is essential for language modelling tasks and helps in subsequent analysis and error identification. It ensures consistency in processing the textual data. A minimal sketch of this and the following normalization steps appears at the end of this subsection.

Lowercasing: Convert all text to lowercase to avoid discrepancies due to case sensitivity. Lowercasing helps in ensuring consistent comparison and analysis of text data. However, it is important to consider case sensitivity if it plays a significant role in the error analysis.

Stop Word Removal: Eliminate commonly used stop words such as articles, prepositions, and conjunctions that do not carry significant semantic meaning. Stop word removal can reduce noise and focus the analysis on more informative words and phrases.

Lemmatization or Stemming: Apply lemmatization or stemming techniques to normalize words to their base or root forms. This helps in reducing vocabulary size and ensures consistency in word representation, which is important for accurate error identification [8].

Error Annotation Guidelines: Develop clear and comprehensive guidelines for annotating errors in the dataset. These guidelines should provide detailed instructions on how to identify and categorize different types of errors, ensuring consistency among annotators.

Quality Control and Inter-Annotator Agreement: Implement quality control measures to maintain consistency and reliability in error annotations. This can involve having multiple annotators independently label a subset of the data and measuring inter-annotator agreement using metrics such as Cohen's kappa or Fleiss' kappa.

Balancing Error Categories: Ensure that the dataset contains a balanced representation of different error categories. This helps in analysing and addressing errors comprehensively and avoids biases or skewed error distributions.

Data Augmentation: Depending on the size and diversity of the dataset, consider data augmentation techniques to enhance the dataset's coverage of various error types. Augmentation can involve generating additional instances with known error patterns or incorporating diverse user scenarios.

By following these pre-processing steps, researchers can ensure data quality and consistency, making the dataset suitable for error analysis and subsequent research. A well-pre-processed dataset lays a solid foundation for accurate error identification, analysis, and the development of effective error mitigation strategies.
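As a concrete illustration of the normalization steps described above (tokenization, lowercasing, stop word removal, and lemmatization), the following is a minimal sketch using NLTK. It assumes the relevant NLTK resources have already been downloaded and is illustrative rather than the exact pipeline used for any particular dataset.

# Assumes nltk.download('punkt'), nltk.download('stopwords'), and
# nltk.download('wordnet') have been run beforehand.
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

STOP_WORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(utterance: str) -> list[str]:
    tokens = word_tokenize(utterance)                      # tokenization
    tokens = [t.lower() for t in tokens]                   # lowercasing
    tokens = [t for t in tokens if t.isalpha()]            # drop punctuation/noise
    tokens = [t for t in tokens if t not in STOP_WORDS]    # stop word removal
    return [lemmatizer.lemmatize(t) for t in tokens]       # lemmatization

print(preprocess("The appointments were scheduled for Friday afternoons."))
# -> ['appointment', 'scheduled', 'friday', 'afternoon']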

Error Analysis

Categorization of Common Errors Made by ChatGPT

Common errors made by ChatGPT can be grouped into several broad categories. While the specific errors may vary depending on the context and data, the following categories provide a general framework for classifying them:

Grammatical Errors:

Subject-verb agreement: Errors in matching the subject and verb in terms of number, person, or tense.

Verb conjugation: Incorrectly conjugating verbs or using the wrong verb form.

Pronoun errors: Mistakes in pronoun reference, gender agreement, or pronoun case.

Sentence structure: Errors in sentence formation, including missing or misplaced punctuation, incorrect word order, or run-on sentences.

Semantic Errors:

Lexical ambiguity: Misunderstanding or misinterpreting ambiguous words or phrases, leading to incorrect or nonsensical responses.

Word sense disambiguation: Failing to identify the correct meaning of a word with multiple senses in a given context.

Collocation errors: Incorrectly pairing words or phrases that typically occur together, resulting in unnatural or inappropriate language use.

Semantic inconsistency: Producing responses that are inconsistent or contradictory in meaning or logic.

Contextual Errors:

Lack of context awareness: Failing to maintain coherence and understanding of the conversation context across multiple turns.

Misunderstanding context-dependent expressions or idiomatic language: Interpreting context-specific phrases literally or incorrectly, leading to errors in response generation.

Failure to remember previous information: Forgetting or misremembering information provided earlier in the conversation, resulting in confusion or repetitive responses.

Ambiguity Resolution Challenges:

Anaphora resolution: Difficulty in correctly identifying and resolving pronoun references to their antecedents.

Ellipsis resolution: Errors in understanding and completing elliptical or abbreviated sentences or phrases.

Referential ambiguity: Misinterpreting ambiguous references, such as pronouns or demonstrative expressions, leading to incorrect interpretations.

Factual Inaccuracies:

Incorrect information: Providing factually incorrect or inaccurate responses to user queries.

False assumptions: Making incorrect assumptions or presuppositions, leading to errors in generating appropriate responses.

Incomplete or outdated knowledge: Lacking access to the most up-to-date or comprehensive information, resulting in outdated or incomplete responses.

Inappropriate Responses:

Offensive or biased language: Generating responses that include offensive, discriminatory, or biased content.

Inappropriate suggestions: Providing inappropriate or insensitive recommendations, suggestions, or advice.

Lack of empathy or social understanding: Failing to recognize and respond appropriately to user emotions or social cues.

It is important to note that these categories may overlap, and some errors may fall into multiple categories simultaneously. By categorizing the common errors made by ChatGPT, researchers can systematically analyze and address these errors to improve the overall performance and reliability of the language model [9]. A brief sketch of automatically flagging one of these error categories follows below.
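As one illustration of how a subset of these categories can be flagged automatically, the sketch below uses the open-source LanguageTool checker, via the language_tool_python package, to detect grammatical errors in generated responses. This is an assumed tooling choice for demonstration, not part of the paper's method; semantic, contextual, and factual error categories generally still require human annotation.

import language_tool_python

# Requires Java; the package downloads the LanguageTool server on first use.
tool = language_tool_python.LanguageTool("en-US")

responses = [
    "She go to the library every Fridays.",   # contains grammatical errors
    "The meeting is scheduled for Monday.",   # clean sentence
]

for text in responses:
    matches = tool.check(text)
    print(f"{len(matches)} issue(s) in: {text!r}")
    for m in matches:
        print(f"  - {m.ruleId}: {m.message}")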

Causes of Errors

Errors made by ChatGPT can be attributed to several underlying causes. Understanding these causes is crucial for identifying the limitations of the model and developing effective strategies to mitigate errors. Here are some common causes of errors in ChatGPT:

Insufficient Training Data: Language models like ChatGPT are trained on vast amounts of text data, but they may still lack exposure to specific linguistic patterns, domain-specific knowledge, or rare language constructs. Limited training data can result in errors when the model encounters unfamiliar or uncommon inputs.

Contextual Ambiguity: ChatGPT may struggle with resolving ambiguous language constructs or understanding context-dependent information. Ambiguity in pronoun references, ellipsis, or idiomatic expressions can lead to errors in comprehension and response generation.

Lack of Common Sense or World Knowledge: ChatGPT may lack comprehensive knowledge about the world, which can result in factual inaccuracies or misunderstandings. Without access to real-time information or specific domain knowledge, the model may provide incorrect or outdated responses.

Biases in Training Data: Language models can inadvertently learn biases present in the training data. This can lead to biased or discriminatory responses, reinforcing stereotypes or exhibiting preferential treatment based on demographic or cultural factors.

Overconfidence or Insufficient Uncertainty Estimation: ChatGPT may generate responses with a high degree of confidence even when the correctness of the response is uncertain. Lack of appropriate uncertainty estimation techniques can lead to the model providing incorrect or misleading information without indicating its uncertainty. A simple entropy-based sketch of this idea follows this list.

Lack of Feedback and Reinforcement Learning: Language models like ChatGPT primarily rely on pre-training and fine-tuning processes. Without continuous feedback from users or explicit reinforcement learning, the model may struggle to correct errors or adapt to specific user requirements.

Data Skewness or Bias: If the training data contains imbalanced representations of certain language patterns or demographics, ChatGPT may exhibit biases in its responses. This can result in disproportionate or skewed outputs that do not reflect the diversity of user inputs.

Sensitivity to Input Phrasing: ChatGPT's responses may vary based on slight variations in input phrasing. A change in wording or sentence structure can lead to different responses, including errors or inconsistencies.

Algorithmic Limitations: The underlying algorithms and architectures of language models like ChatGPT have inherent limitations. These limitations can include difficulties in long-range dependency modelling, lack of explicit reasoning or inference capabilities, and challenges in retaining and recalling relevant context over multiple turns.
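As a simple illustration of the uncertainty-estimation point above, the sketch below flags low-confidence generations by averaging the Shannon entropy of the model's next-token distributions. The probability values and the threshold are made-up assumptions; a deployed system would read the distributions from the model's softmax outputs and tune the threshold on held-out data.

import math

def token_entropy(probs):
    """Shannon entropy (in bits) of one next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One (made-up) truncated distribution per generated token.
step_distributions = [
    [0.90, 0.05, 0.05],   # confident step
    [0.40, 0.35, 0.25],   # uncertain step
    [0.70, 0.20, 0.10],
]

avg_entropy = sum(token_entropy(p) for p in step_distributions) / len(step_distributions)
THRESHOLD_BITS = 1.0  # assumed cut-off; tuned empirically in practice
if avg_entropy > THRESHOLD_BITS:
    print(f"Low confidence (avg entropy {avg_entropy:.2f} bits): hedge or ask a clarifying question.")
else:
    print(f"High confidence (avg entropy {avg_entropy:.2f} bits).")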

Understanding these causes helps researchers and developers identify areas for improvement and guide the development of techniques to address the identified errors. By addressing these underlying causes, it is possible to enhance the performance and reliability of ChatGPT and similar language models [10].

Mitigation Strategies

Mitigating errors in ChatGPT requires the implementation of various strategies and techniques. Here are some mitigation strategies that can be explored to enhance the performance and reliability of ChatGPT:

Fine-tuning: Fine-tuning involves training the base language model on domain-specific or task-specific data to improve its performance in a specific context. By exposing the model to data relevant to the target application or domain, fine-tuning helps reduce errors and improve response quality. A hedged code sketch of this approach follows this list of strategies.

Reinforcement Learning with User Feedback: Incorporating reinforcement learning techniques enables the model to receive feedback from users or human evaluators. Positive or negative feedback on generated responses can be used to fine-tune the model iteratively, improving its responses over time.

Context-Awareness Enhancements: Enhancing the model's context awareness can help reduce errors caused by misunderstandings or inconsistencies. Techniques such as memory mechanisms, attention mechanisms, or dialogue state tracking can improve the model's ability to retain and utilize contextual information across multiple turns.

Error-Specific Training Data Augmentation: Generating additional training data specifically targeting the identified error categories can help the model learn to handle and mitigate those errors more effectively. This approach involves augmenting the dataset with error patterns, enabling the model to better understand and address specific error types.

Bias Mitigation: Addressing biases in language models is crucial for producing fair and unbiased responses. Techniques such as debiasing methods, data augmentation with diverse perspectives, or carefully curated training data can help reduce biases and promote fair and equitable outputs.

Ethical and Safety Constraints: Implementing ethical and safety constraints within ChatGPT can prevent the generation of harmful or inappropriate content. Techniques such as rule-based filtering, human-in-the-loop systems, or content moderation mechanisms can be employed to ensure that the model adheres to ethical guidelines and produces safe responses.

Active Learning and Data Collection: Active learning techniques involve iteratively selecting and labelling informative instances for training. This approach can be used to collect targeted data that addresses specific error scenarios, facilitating error mitigation and performance improvement.

Multi-Model Ensembles: Combining multiple independently trained models into an ensemble can help reduce errors by leveraging diverse perspectives and combining the strengths of different models. Ensemble methods can improve the robustness and reliability of the system.

User Interface and Interaction Design: Careful design of user interfaces and interaction mechanisms can help mitigate errors by providing clear prompts, suggestions, or clarifications to users. Well-designed interfaces can guide users in providing unambiguous inputs and help manage user expectations.
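To make the fine-tuning strategy above more concrete, the following is a hedged sketch using the Hugging Face Transformers and Datasets libraries. The base model (gpt2), the public dialogue dataset, and the hyperparameters are illustrative assumptions rather than the configuration used for ChatGPT, whose weights are not publicly available; the same pattern applies to any accessible causal language model paired with a domain-specific corpus.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Illustrative base model and public dialogue data; a real project would swap
# in its own domain-specific corpus here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("daily_dialog", split="train[:1%]")

def tokenize(batch):
    # Join each multi-turn dialogue into a single training string.
    text = [" ".join(turns) for turns in batch["dialog"]]
    return tokenizer(text, truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chatbot-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False yields standard next-token (causal) language-modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model()  # writes the fine-tuned weights to chatbot-finetuned/

Fine-tuning on in-domain conversations in this way is typically far cheaper than retraining from scratch, and the resulting model can then be evaluated with the metrics discussed in the Methodology section.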

It is important to note that error mitigation strategies should be implemented while considering the potential trade-offs and ethical considerations. Continual monitoring, evaluation, and user feedback are crucial to assess the effectiveness of the mitigation strategies and identify areas for further improvement.

Discussion

The research paper focused on analysing and mitigating errors in ChatGPT, a conversational AI language model. The findings and proposed strategies provide valuable insights into the limitations of ChatGPT and the potential for improving its performance.

The error analysis conducted on ChatGPT revealed several common error types, including grammatical errors, semantic errors, contextual errors, and factual inaccuracies. Categorizing these errors helps provide a comprehensive understanding of the model's weaknesses and guides the development of targeted error mitigation strategies.

The experimental results demonstrated the effectiveness of the employed error mitigation strategies in reducing errors and enhancing ChatGPT's performance. This is an important step forward in improving the reliability and utility of conversational AI systems. However, it is important to acknowledge that some error types, such as resolving contextual ambiguities or understanding nuanced language constructs, pose ongoing challenges in natural language processing.

Insights gained from the error analysis provide valuable guidance for future research and development efforts. Insufficient training data, contextual ambiguity, and a lack of common sense or world knowledge were identified as significant contributing factors to errors. These insights can inform the improvement of training methodologies, context modelling techniques, and the integration of external knowledge sources to enhance the performance of ChatGPT.

Ethical considerations also emerged as a crucial aspect of error analysis. The presence of biases in ChatGPT's responses necessitates active measures to mitigate them. Incorporating bias detection mechanisms and involving users in providing feedback can make the model more inclusive and sensitive to ethical considerations, and can promote fairness.

The significance of user feedback throughout the error analysis and error mitigation process cannot be overstated. User evaluations and feedback play a vital role in assessing the perceived quality and appropriateness of ChatGPT's responses. Engaging users in the improvement process is essential for uncovering specific error scenarios, addressing edge cases, and continually refining the model's performance.

While the proposed error mitigation strategies yielded positive results, it is important to acknowledge the remaining limitations. Challenges such as handling long-range dependencies, understanding complex reasoning, and generating context-aware responses still require further investigation. Future research should focus on addressing these limitations to enhance the overall performance and reliability of ChatGPT.

The practical applications and impact of error analysis and mitigation in ChatGPT are significant. By reducing errors, conversational AI systems like ChatGPT can provide better user experiences, increase user trust, and find applications in various domains, including customer support, virtual assistance, and critical industries such as healthcare or legal services.

Conclusion

In conclusion, the research paper's analysis of ChatGPT errors and proposed mitigation strategies provide valuable insights into the limitations and potential enhancements of conversational AI systems. By addressing the identified challenges and further refining error mitigation techniques, we can make substantial progress in developing more reliable and effective conversational AI models. The findings contribute to the broader goal of advancing natural language processing and improving the performance of AI language models in real-world applications.

References
