diff --git a/development/README.md b/development/README.md
index 67e95c2ca2cb29d02287792e228778268a6c811f..35da621eed231fa456f0737971a84b45fd5170d8 100644
--- a/development/README.md
+++ b/development/README.md
@@ -12,13 +12,14 @@ and debugging Fuchsia and programs running on Fuchsia.
   covers getting the source, building and running Fuchsia.
 - [Source code](source_code/README.md)
 - [Multiple device setup](workflows/multi_device.md)
- - [Pushing changes](workflows/package_update.md)
+ - [Pushing a package](workflows/package_update.md)
 - [Changes that span layers](workflows/multilayer_changes.md)
 - [Debugging](workflows/debugging.md)
 - [Tracing][tracing]
 - [Trace-based Benchmarking][trace_based_benchmarking]
 - [Build system](build/README.md)
- - [FAQ](/best-practices/faq.md)
+ - [Workflow FAQ](workflows/workflow_faq.md)
+ - [Testing FAQ](workflows/testing_faq.md)
 
 ## Languages
 
diff --git a/development/languages/c-cpp/testing_faq.md b/development/languages/c-cpp/testing_faq.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb306dbd764af32d0b6c0f6271a484fe3bf66ba7
--- /dev/null
+++ b/development/languages/c-cpp/testing_faq.md
@@ -0,0 +1,18 @@
+# Testing C/C++: Questions and Answers
+
+You are encouraged to add your own questions (and answers) here!
+
+[TOC]
+
+## Q: Do we have Sanitizer support?
+
+A: This is work in progress (SEC-27). ASAN is the closest to release (just
+requires symbolization, TC-21).
+
+## Q: How do I run with ASAN?
+
+A: TBD
+
+## Q: Do we have Fuzzers enabled?
+
+A: No, sanitizer work takes precedence. Automated fuzz testing is SEC-44.
diff --git a/development/workflows/testing_faq.md b/development/workflows/testing_faq.md
new file mode 100644
index 0000000000000000000000000000000000000000..cff7cbc6322b81b96f9fd2f8567068670857dbad
--- /dev/null
+++ b/development/workflows/testing_faq.md
@@ -0,0 +1,91 @@
+# Testing: Questions and Answers
+
+You are encouraged to add your own questions (and answers) here!
+
+[TOC]
+
+## Q: How do I define a new unit test?
+
+A: Use language-appropriate constructs, like GTest for C++. You can define a new
+file if need be, such as:
+
+(in a BUILD.gn file)
+```code
+executable("unittests") {
+  output_name = "scenic_unittests"
+  testonly = true
+  sources = ["some_test.cc"]
+  deps = [":some_dep"]
+}
+```
+
+## Q: What ensures it is run?
+
+A: An unbroken chain of dependencies that roll up to a config file under
+`//<layer>/packages/tests/`, such as
+[`//garnet/packages/tests/`](https://fuchsia.googlesource.com/garnet/+/master/packages/tests/).
+
+For example:
+
+`//garnet/lib/ui/scenic/tests:unittests`
+
+is an executable, listed under the "tests" stanza of
+
+`//garnet/bin/ui:scenic_tests`
+
+which is a package, which is itself listed in the "packages" stanza of
+
+`//garnet/packages/tests/scenic`
+
+a file that defines what test binaries go into a system image.
+
+Think of it as a blueprint file: a (transitive) manifest that details which
+tests to try to build and run.
+
+Typically, one just adds a new test to an existing binary, or a new test binary to an existing package.
+
+## Q: How do I run this unit test on a QEMU instance?
+
+A: Start a QEMU instance on your workstation, and then *manually* invoke the unit test binary.
+
+First, start QEMU with `fx run`.
+
+In the QEMU shell, run `/system/test/scenic_unittests`. The filename is taken
+from the value of "output_name" in the executable's build rule. All test
+binaries live in the `/system/test` directory.
+
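+For example, using the `scenic_unittests` binary from the example above
+(substitute your own test's "output_name"), the flow looks roughly like this:
+
+```
+$ fx run                          # on your workstation: start a QEMU instance
+$ /system/test/scenic_unittests   # in the QEMU shell: invoke the test directly
+$ dm shutdown                     # in the QEMU shell: exit QEMU when done
+```
+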
+Note Well! The files are loaded into the QEMU instance at startup. So after
+rebuilding a test, you'll need to shut down and restart the QEMU instance to see
+the rebuilt test. To exit QEMU, run `dm shutdown`.
+
+## Q: How do I run this unit test on my development device?
+
+A: Either invoke it manually, as in QEMU, **or** use `fx run-test` against a running device.
+
+Note that the booted device may not contain your binary at startup, but `fx
+run-test` will build the test binary, ship it over to the device, and run it,
+while piping the output back to your workstation terminal. Slick!
+
+Make sure your device is running (hit Ctrl-D to boot an existing image) and
+connected to your workstation.
+
+From your workstation, run `fx run-test scenic_unittests`. The argument to
+`run-test` is the name of the binary in `/system/test`.
+
+## Q: Where are the test results captured?
+
+A: The output is directed to your terminal.
+
+There is a way to write test output into files (including a summary JSON
+file), which is how CQ bots collect the test output for automated runs.
+
+## Q: How do I run a bunch of tests automatically? How do I ensure all dependencies are tested?
+
+A: Upload your patch to Gerrit and do a CQ dry run.
+
+## Q: How do I run this unit test in a CQ dry run?
+
+A: Clicking on CQ dry run (aka +1) will take a properly defined unit test and
+run it on multiple bots, one for each build target (*x86-64* versus *arm64*, *release*
+versus *debug*). Each job will have an output page showing all the tests that
+ran.
diff --git a/best-practices/faq.md b/development/workflows/workflow_faq.md
similarity index 58%
rename from best-practices/faq.md
rename to development/workflows/workflow_faq.md
index 238ba654579e82d7529f3fc97ab38a8c978085d6..4acc2b06d096f1cbd345e647991c0f26bf22beed 100644
--- a/best-practices/faq.md
+++ b/development/workflows/workflow_faq.md
@@ -1,17 +1,15 @@
-# Questions and Answers
+# Workflow: Questions and Answers
 
 You are encouraged to add your own questions (and answers) here!
 
 [TOC]
 
-## Workflow
-
-### Q: Is there a standard Git workflow for Fuchsia?
+## Q: Is there a standard Git workflow for Fuchsia?
 
 A: No. Instead, the Git tool offers infinite control and variety for defining
 your own workflow. Carve out the workflow you need.
 
-#### Rebasing
+### Rebasing
 
 Update all projects simultaneously, and rebase your work branch on `JIRI_HEAD`:
 
@@ -25,7 +23,7 @@ $ git rebase JIRI_HEAD
 The `git rebase` to `JIRI_HEAD` should be done in *each* repo where you have
 ongoing work. It's not needed for repos you haven't touched.
 
-#### Uploading a new patch set (snapshot) of a change
+### Uploading a new patch set (snapshot) of a change
 
 You'll need to *upload* a patch set to [Gerrit](https://fuchsia-review.googlesource.com/)
 to have it reviewed by others. We do this with `jiri upload`.
@@ -50,7 +48,7 @@ $ git commit -a --amend
 $ jiri upload
 ```
 
-#### Resolving merge conflicts
+### Resolving merge conflicts
 
 Attempt a rebase:
 
@@ -62,7 +60,7 @@ $ jiri upload
 But read below about how a `git rebase` can negatively interact with `jiri
 update`.
 
-#### Stashing
+### Stashing
 
 You can save all uncommitted changes aside, and re-apply them at a later time.
 This is often useful when you're starting out with Git.
@@ -73,7 +71,7 @@ $ git stash # uncommitted changes will go away
 $ git stash pop # uncommitted changes will come back
 ```
 
-### Q: I use `fx` and `jiri` a lot. How are they related?
+## Q: I use **fx** and **jiri** a lot. How are they related?
A: [`jiri`](https://fuchsia.googlesource.com/jiri/+/master/) is source management for multiple repositories. @@ -84,7 +82,7 @@ wrapper for configuring and running the build system (Make for Zircon, everything else), as well as facilities to help with day-to-day engineering (`fx boot`, `fx log`, etc). -### Q: Will a git rebase to origin/master mess up my jiri-updated (ie synchronized) view of the repository? +## Q: Will a git rebase to origin/master mess up my jiri-updated (ie synchronized) view of the repository? A: No, if jiri is managing up to the *same layer* as your repository. Possibly yes, if you git rebase a repository that is lower in the layer cake managed by @@ -105,7 +103,7 @@ managed by jiri with `fx set-layer`. If you have a particular commit that you want jiri to honor, download its `jiri.update` file and feed it to `jiri update`. -### Q: What if I need an atomic commit across git repositories? +## Q: What if I need an atomic commit across git repositories? A: Can't, sorry. Try to arrange your CLs to not break each layer during a transition (i.e., do a [soft @@ -131,7 +129,7 @@ Alternatively, you *could* do something as follows: 1. Change `upper` to use the original interface name, now with its new contract. Make any changes required. 1. Delete the clone interface in `lower`. -### Q: How do I do parallel builds from a single set of sources? +## Q: How do I do parallel builds from a single set of sources? A: Currently, this is not possible. The vanilla GN + Ninja workflow should allow this, but `fx` maintains additional global state. @@ -140,107 +138,7 @@ Another slight limitation is that GN files to Zircon are currently being generated and running multiple parallel builds which both try to generate GN files may confuse Ninja. It's unclear whether this is a real issue or not. -### Q: What if I want to build at a previous snapshot across the repos? +## Q: What if I want to build at a previous snapshot across the repos? A: You'll need to `jiri update` against a *jiri snapshot file*, an XML file that captures the state of each repo tracked by jiri. - -## Testing - -### Q: How do I define a new unit test? - -A: Use GTest constructs. You can define a new file if need be, such as: - -(in a BUILD.gn file) -```code -executable("unittests") { - output_name = "scenic_unittests" - testonly = true - sources = ["some_test.cc"], - deps = [":some_dep"], -} -``` - -### Q: What ensures it is run? - -A: An unbroken chain of dependencies that roll up to a config file under -`//<layer>/packages/tests/`, such as -[`//garnet/packages/tests/`](https://fuchsia.googlesource.com/garnet/+/master/packages/tests/). - -For example: - -`//garnet/lib/ui/scenic/tests:unittests` - -is an executable, listed under the "tests" stanza of - -`//garnet/bin/ui:scenic_tests` - -which is a package, which is itself listed in the "packages" stanza of - -`//garnet/packages/tests/scenic` - -a file that defines what test binaries go into a system image. - -Think of it as a blueprint file: a (transitive) manifest that details which -tests to try build and run. - -Typically, one just adds a new test to an existing binary, or a new test binary to an existing package. - -### Q: How do I run this unit test on a QEMU instance? - -A: Start a QEMU instance on your workstation, and then *manually* invoke the unit test binary. - -First, start QEMU with `fx run`. - -In the QEMU shell, run `/system/test/scenic_unittests`. The filename is taken -from the value of "output_name" from the executable's build rule. 
All test -binaries live in the `/system/test` directory. - -Note Well! The files are loaded into the QEMU instance at startup. So after -rebuilding a test, you'll need to shutdown and re-start the QEMU instance to see -the rebuilt test. To exit QEMU, `dm shutdown`. - -### Q: How do I run this unit test on my development device? - -A: Either manual invocation, like in QEMU, **or** `fx run-test` to a running device. - -Note that the booted device may not contain your binary at startup, but `fx -run-test` will build the test binary, ship it over to the device, and run it, -while piping the output back to your workstation terminal. Slick! - -Make sure your device is running (hit Ctrl-D to boot an existing image) and -connected to your workstation. - -From your workstation, `fx run-test scenic_unittests`. The argument to -`run-test` is the name of the binary in `/system/test`. - -### Q: Where are the test results captured? - -A: The output is directed to your terminal. - -There does exist a way to write test output into files (including a summary JSON -file), which is how CQ bots collect the test output for automated runs. - -### Q: How do I run a bunch of tests automatically? How do I ensure all dependencies are tested? - -A: Upload your patch to Gerrit and do a CQ dry run. - -### Q: How do I run this unit test in a CQ dry run? - -A: Clicking on CQ dry run (aka +1) will take a properly defined unit test and -run it on multiple bots, one for each build target (*x86-64* versus *arm64*, *release* -versus *debug*). Each job will have an output page showing all the tests that -ran. - -### Q: Do we have Sanitizer support? - -A: This is work in progress (SEC-27). ASAN is the closest to release (just -requires symbolization, TC-21). - -### Q: How do I run with ASAN? - -A: TBD - -### Q: Do we have Fuzzers enabled? - -A: No, sanitizer work takes precedence. Automated fuzz testing is SEC-44. diff --git a/getting_started.md b/getting_started.md index f84952f8611dcce8c510e8b3eca180f7596a253b..35028be7462697b57e654d7efa294419a1af85eb 100644 --- a/getting_started.md +++ b/getting_started.md @@ -15,6 +15,8 @@ to work on Zircon only, read and follow Zircon's doc. *** +[TOC] + ## Prerequisites ### Prepare your build environment (Once per build environment) @@ -52,7 +54,7 @@ brew install wget pkg-config glib autoconf automake libtool golang ``` # Install MacPorts -#See https: // guide.macports.org/chunked/installing.macports.html +# See https://guide.macports.org/chunked/installing.macports.html port install autoconf automake libtool libpixman pkgconfig glib2 ``` @@ -227,7 +229,7 @@ If you would like to use a text shell inside a terminal emulator from within the you can launch the [term](https://fuchsia.googlesource.com/topaz/+/master/app/term) by selecting the "Ask Anything" box and typing `moterm`. -### Running tests +## Running tests Compiled test binaries are installed in `/system/test/`. You can run a test by invoking it in the terminal. E.g. @@ -243,6 +245,8 @@ Fuchsia with networking enabled in one terminal, then in another terminal, run: fx run-test <test name> [<test args>] ``` +You may wish to peruse the [testing FAQ](development/workflows/testing_faq.md). + ## Contribute changes * See [CONTRIBUTING.md](CONTRIBUTING.md). 
@@ -250,7 +254,6 @@ fx run-test <test name> [<test args>] ## Additional helpful documents * [Fuchsia documentation](/README.md) hub -* [Fuchsia FAQ](/best-practices/faq.md) * Working with Zircon - [copying files, network booting, log viewing, and more](https://fuchsia.googlesource.com/zircon/+/master/docs/getting_started.md#Copying-files-to-and-from-Zircon) * [Information on the system bootstrap application](https://fuchsia.googlesource.com/garnet/+/master/bin/sysmgr/).