Here we collect sources that identify, define, classify, and evaluate research techniques in ways that may help design the pattern language. These are not themselves patterns; indeed, many of them lay out top-down guidance rather than working from examples found in the wild. They do, however, provide advice about what to look for in the actual examples.
We recognize the tension between the pattern philosophy of identifying patterns from examples found in the wild and documents that provide guidance idealized from the wild. Some of the resources here do draw on concrete examples, and thus occupy a middle ground.
Within Software Engineering
Description and comparison of different types of papers
“Writing Good Software Engineering Research Papers” [Shaw, 2003] Here called the ICSE2002 study. This paper identified the types of problems, solutions, and validations in papers submitted to ICSE2002, reported the relative proportions in submitted and accepted papers, and identified the combinations that were most successful. This experience strongly suggests that a pattern language would be a good way to describe successful structures for scientific arguments and hence for papers.
Specific types of research
Jim Herbsleb’s desiderata for reading and evaluating an empirical paper 
Handout on research methods from USER 2012 (Workshop on User Evaluation for Software Engineering) 
“Requirements engineering paper classification and evaluation criteria” [Wieringa, Maiden, Mead, Rolland, 2006]
“Systematic literature reviews in software engineering — a systematic literature review” [Kitchenham, Brereton, Budgen, Turner, Bailey, Linkman, 2009]
“Guidelines for conducting and reporting case study research in software engineering” [Runeson, Höst, 2009]. This paper surveys the numerous classifications of empirical research, which make distinctions along several axes. This introductory material can help guide the organization of empirical patterns, so this resource is useful beyond the specific method in its title.
“Preliminary Guidelines for Empirical Research in Software Engineering” [Kitchenham, Pfleeger, Pickard, Jones, Hoaglin, El-Emam, Rosenberg, 2001]. Another resource that covers a wide swath of empirical research. This one gives specific guidelines, which may be helpful in formulating patterns.
The ICSE 2015 call for papers asked authors to identify the category of their paper (analytical, empirical, technological, methodological, perspectives) as well as its topic area; the description of each category includes a sentence about evaluation criteria. The co-chairs’ report shows how the papers were distributed among these categories and comments on their use.
General advice and advice on the research process
Nick Feamster recognizes patterns in the research process itself: activities like finding a problem by hopping on a trend, developing your secret weapon, revisiting old problems, looking for pain points, etc. These are interesting but largely complementary to the patterns here, which concern the structure of the papers that present scientific results (and presumably the structure of the results themselves).
A page of advice from Grigori Melnik at Microsoft
Outside Software Engineering
In “A Preliminary Analysis of the Products of HCI Research, Using Pro Forma Abstracts”, Newman explored the question of whether Human-Computer Interaction is a form of engineering by reading a large collection of engineering papers and creating three “pro forma abstracts” that sufficed to describe most of them. He created two more such abstracts to extend coverage to HCI. These are essentially patterns.
The journal Social Psychological and Personality Science is introducing a set of standards that are probably applicable to much of our empirical research. They introduced these, with a summary of the standards, in an editorial on best practices.
PeerJ, an open access journal, has an extensive list of standards that should be followed for papers of many types in biological and medical sciences. These largely reference standards of the field, not just of this journal.
Evidence-Based Medicine recognizes that not all evidence is of the same quality, and that the quality of evidence drives the strength of the conclusions or recommendations you can draw. The Oxford Centre for Evidence-Based Medicine offers one version of explicit definitions of levels of evidence and grades of recommendation.
An overview of basic research concepts by Oskar Blakstad inventories research methods from the standpoint of (apparently) the life sciences.
On significance and overfitting in science generally: Christie Aschwanden wrote a really nice piece on the practical difficulties of doing replicable science, especially p-hacking and the subtleties of pinning down your problem definition; it includes an interactive demonstration of p-hacking.
Demonstrating bad journalism about bad science by example, Bohannon conducted an unsound nutritional study and got widespread press coverage.
Summary of Carl Sagan’s “Fine Art of Baloney Detection”, in a review of The Demon-Haunted World.